00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2033
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3293
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.130 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.131 The recommended git tool is: git
00:00:00.131 using credential 00000000-0000-0000-0000-000000000002
00:00:00.133 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.177 Fetching changes from the remote Git repository
00:00:00.178 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.221 Using shallow fetch with depth 1
00:00:00.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.221 > git --version # timeout=10
00:00:00.256 > git --version # 'git version 2.39.2'
00:00:00.256 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.280 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.280 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.183 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.194 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.207 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:06.207 > git config core.sparsecheckout # timeout=10
00:00:06.218 > git read-tree -mu HEAD # timeout=10
00:00:06.233 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:06.264 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:06.264 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:06.363 [Pipeline] Start of Pipeline
00:00:06.375 [Pipeline] library
00:00:06.377 Loading library shm_lib@master
00:00:06.377 Library shm_lib@master is cached. Copying from home.
00:00:06.391 [Pipeline] node
00:00:06.402 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.403 [Pipeline] {
00:00:06.414 [Pipeline] catchError
00:00:06.416 [Pipeline] {
00:00:06.435 [Pipeline] wrap
00:00:06.445 [Pipeline] {
00:00:06.455 [Pipeline] stage
00:00:06.457 [Pipeline] { (Prologue)
00:00:06.650 [Pipeline] sh
00:00:06.931 + logger -p user.info -t JENKINS-CI
00:00:06.946 [Pipeline] echo
00:00:06.947 Node: WFP8
00:00:06.955 [Pipeline] sh
00:00:07.247 [Pipeline] setCustomBuildProperty
00:00:07.258 [Pipeline] echo
00:00:07.259 Cleanup processes
00:00:07.263 [Pipeline] sh
00:00:07.539 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.539 333839 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.554 [Pipeline] sh
00:00:07.835 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.835 ++ grep -v 'sudo pgrep'
00:00:07.835 ++ awk '{print $1}'
00:00:07.835 + sudo kill -9
00:00:07.835 + true
00:00:07.846 [Pipeline] cleanWs
00:00:07.854 [WS-CLEANUP] Deleting project workspace...
00:00:07.854 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.859 [WS-CLEANUP] done
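The "Cleanup processes" step above makes sure nothing from a previous run is still holding the SPDK workspace before the job starts. A minimal standalone sketch of that pattern, with the workspace path taken from the log and the variable names being illustrative only:

    # Kill leftover processes that still reference the old SPDK checkout.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # pgrep -af prints "PID full-command-line"; drop the pgrep invocation itself
    # and keep only the PID column.
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true   # an empty PID list makes kill fail; tolerate that

The trailing "|| true" (the "+ true" in the log) is what keeps this step green when no stale processes exist.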
00:00:07.863 [Pipeline] setCustomBuildProperty
00:00:07.874 [Pipeline] sh
00:00:08.150 + sudo git config --global --replace-all safe.directory '*'
00:00:08.207 [Pipeline] httpRequest
00:00:08.236 [Pipeline] echo
00:00:08.238 Sorcerer 10.211.164.101 is alive
00:00:08.244 [Pipeline] httpRequest
00:00:08.249 HttpMethod: GET
00:00:08.249 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.250 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.271 Response Code: HTTP/1.1 200 OK
00:00:08.271 Success: Status code 200 is in the accepted range: 200,404
00:00:08.272 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:15.507 [Pipeline] sh
00:00:15.790 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:15.806 [Pipeline] httpRequest
00:00:15.839 [Pipeline] echo
00:00:15.841 Sorcerer 10.211.164.101 is alive
00:00:15.851 [Pipeline] httpRequest
00:00:15.855 HttpMethod: GET
00:00:15.856 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:15.856 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:15.869 Response Code: HTTP/1.1 200 OK
00:00:15.870 Success: Status code 200 is in the accepted range: 200,404
00:00:15.870 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:46.913 [Pipeline] sh
00:00:47.192 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:49.735 [Pipeline] sh
00:00:50.015 + git -C spdk log --oneline -n5
00:00:50.015 dbef7efac test: fix dpdk builds on ubuntu24
00:00:50.015 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:00:50.015 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:00:50.015 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:00:50.015 e03c164a1 nvme: add nvme_ctrlr_lock
00:00:50.027 [Pipeline] }
00:00:50.044 [Pipeline] // stage
00:00:50.053 [Pipeline] stage
00:00:50.055 [Pipeline] { (Prepare)
00:00:50.073 [Pipeline] writeFile
00:00:50.091 [Pipeline] sh
00:00:50.373 + logger -p user.info -t JENKINS-CI
00:00:50.386 [Pipeline] sh
00:00:50.667 + logger -p user.info -t JENKINS-CI
00:00:50.714 [Pipeline] sh
00:00:50.995 + cat autorun-spdk.conf
00:00:50.995 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:50.995 SPDK_TEST_NVMF=1
00:00:50.995 SPDK_TEST_NVME_CLI=1
00:00:50.995 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:50.995 SPDK_TEST_NVMF_NICS=e810
00:00:50.995 SPDK_RUN_UBSAN=1
00:00:50.995 NET_TYPE=phy
00:00:51.001 RUN_NIGHTLY=1
00:00:51.006 [Pipeline] readFile
00:00:51.032 [Pipeline] withEnv
00:00:51.034 [Pipeline] {
00:00:51.048 [Pipeline] sh
00:00:51.332 + set -ex
00:00:51.332 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:51.332 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:51.332 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.332 ++ SPDK_TEST_NVMF=1
00:00:51.332 ++ SPDK_TEST_NVME_CLI=1
00:00:51.332 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:51.332 ++ SPDK_TEST_NVMF_NICS=e810
00:00:51.332 ++ SPDK_RUN_UBSAN=1
00:00:51.332 ++ NET_TYPE=phy
00:00:51.332 ++ RUN_NIGHTLY=1
00:00:51.332 + case $SPDK_TEST_NVMF_NICS in
00:00:51.332 + DRIVERS=ice
00:00:51.332 + [[ tcp == \r\d\m\a ]]
00:00:51.332 + [[ -n ice ]]
00:00:51.332 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:51.332 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:51.332 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:51.332 rmmod: ERROR: Module irdma is not currently loaded
00:00:51.332 rmmod: ERROR: Module i40iw is not currently loaded
00:00:51.332 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:51.332 + true
00:00:51.332 + for D in $DRIVERS
00:00:51.332 + sudo modprobe ice
00:00:51.332 + exit 0
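The withEnv step above prepares the NIC for an e810/TCP run: the RDMA provider modules that could claim the port are unloaded (the rmmod errors simply mean they were never loaded) and only the ice driver is loaded. A minimal sketch of that step, with the module list exactly as it appears in the log:

    # Driver prep for SPDK_TEST_NVMF_NICS=e810 with a TCP transport.
    DRIVERS=ice
    # These RDMA providers are only needed for an rdma-transport run;
    # remove them so they cannot interfere with the test NICs.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done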
00:00:51.341 [Pipeline] }
00:00:51.361 [Pipeline] // withEnv
00:00:51.366 [Pipeline] }
00:00:51.382 [Pipeline] // stage
00:00:51.392 [Pipeline] catchError
00:00:51.394 [Pipeline] {
00:00:51.408 [Pipeline] timeout
00:00:51.408 Timeout set to expire in 50 min
00:00:51.410 [Pipeline] {
00:00:51.425 [Pipeline] stage
00:00:51.427 [Pipeline] { (Tests)
00:00:51.443 [Pipeline] sh
00:00:51.724 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:51.724 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:51.725 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:51.725 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:51.725 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:51.725 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:51.725 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:51.725 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:51.725 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:51.725 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:51.725 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:51.725 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
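Before any tests run, autoruner.sh derives a fixed workspace layout from the directory it is handed. A small sketch of that convention, using the paths and variable names printed in the trace above:

    # Workspace layout used for the rest of the run:
    #   $DIR_ROOT         the Jenkins workspace
    #   $DIR_ROOT/spdk    the SPDK checkout under test
    #   $DIR_ROOT/output  artifacts (manifests, logs) collected for archiving
    DIR_ROOT=$(readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest)
    DIR_SPDK=$DIR_ROOT/spdk
    DIR_OUTPUT=$DIR_ROOT/output
    [[ -d $DIR_OUTPUT ]] || mkdir -p "$DIR_OUTPUT"
    cd "$DIR_ROOT"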
00:00:51.725 + source /etc/os-release
00:00:51.725 ++ NAME='Fedora Linux'
00:00:51.725 ++ VERSION='38 (Cloud Edition)'
00:00:51.725 ++ ID=fedora
00:00:51.725 ++ VERSION_ID=38
00:00:51.725 ++ VERSION_CODENAME=
00:00:51.725 ++ PLATFORM_ID=platform:f38
00:00:51.725 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:51.725 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:51.725 ++ LOGO=fedora-logo-icon
00:00:51.725 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:51.725 ++ HOME_URL=https://fedoraproject.org/
00:00:51.725 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:51.725 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:51.725 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:51.725 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:51.725 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:51.725 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:51.725 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:51.725 ++ SUPPORT_END=2024-05-14
00:00:51.725 ++ VARIANT='Cloud Edition'
00:00:51.725 ++ VARIANT_ID=cloud
00:00:51.725 + uname -a
00:00:51.725 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:51.725 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:53.626 Hugepages
00:00:53.626 node hugesize free / total
00:00:53.626 node0 1048576kB 0 / 0
00:00:53.626 node0 2048kB 0 / 0
00:00:53.626 node1 1048576kB 0 / 0
00:00:53.626 node1 2048kB 0 / 0
00:00:53.626
00:00:53.626 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:53.626 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:53.626 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:53.626 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:53.626 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:53.626 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:53.626 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:53.626 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:53.626 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:53.886 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:53.886 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:53.886 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:53.886 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:53.886 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:53.886 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:53.886 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:53.886 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:53.887 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:53.887 + rm -f /tmp/spdk-ld-path
00:00:53.887 + source autorun-spdk.conf
00:00:53.887 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:53.887 ++ SPDK_TEST_NVMF=1
00:00:53.887 ++ SPDK_TEST_NVME_CLI=1
00:00:53.887 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:53.887 ++ SPDK_TEST_NVMF_NICS=e810
00:00:53.887 ++ SPDK_RUN_UBSAN=1
00:00:53.887 ++ NET_TYPE=phy
00:00:53.887 ++ RUN_NIGHTLY=1
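autorun-spdk.conf is a flat KEY=VALUE shell fragment, which is why the same file that the Prepare stage wrote can simply be sourced again here. A minimal sketch of consuming it; the variable names come from the log, while the branch and echo are only an illustration:

    # The test configuration is plain shell: SPDK_TEST_NVMF=1, NET_TYPE=phy, ...
    conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    [[ -f $conf ]] && source "$conf"
    if [[ $SPDK_TEST_NVMF -eq 1 && $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
        echo "NVMe-oF functional tests over TCP on $SPDK_TEST_NVMF_NICS NICs (NET_TYPE=$NET_TYPE)"
    fi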
00:00:53.887 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:53.887 + [[ -n '' ]]
00:00:53.887 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:53.887 + for M in /var/spdk/build-*-manifest.txt
00:00:53.887 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:53.887 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:53.887 + for M in /var/spdk/build-*-manifest.txt
00:00:53.887 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:53.887 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:53.887 ++ uname
00:00:53.887 + [[ Linux == \L\i\n\u\x ]]
00:00:53.887 + sudo dmesg -T
00:00:53.887 + sudo dmesg --clear
00:00:53.887 + dmesg_pid=334762
00:00:53.887 + [[ Fedora Linux == FreeBSD ]]
00:00:53.887 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:53.887 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:53.887 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:53.887 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:53.887 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:53.887 + [[ -x /usr/src/fio-static/fio ]]
00:00:53.887 + export FIO_BIN=/usr/src/fio-static/fio
00:00:53.887 + FIO_BIN=/usr/src/fio-static/fio
00:00:53.887 + sudo dmesg -Tw
00:00:53.887 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:53.887 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:53.887 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:53.887 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:53.887 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:53.887 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:53.887 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:53.887 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:53.887 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:53.887 Test configuration:
00:00:53.887 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:53.887 SPDK_TEST_NVMF=1
00:00:53.887 SPDK_TEST_NVME_CLI=1
00:00:53.887 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:53.887 SPDK_TEST_NVMF_NICS=e810
00:00:53.887 SPDK_RUN_UBSAN=1
00:00:53.887 NET_TYPE=phy
00:00:53.887 RUN_NIGHTLY=1 17:26:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:53.887 17:26:15 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:53.887 17:26:15 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:53.887 17:26:15 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:53.887 17:26:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.887 17:26:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.887 17:26:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.887 17:26:15 -- paths/export.sh@5 -- $ export PATH
00:00:53.887 17:26:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.887 17:26:15 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:53.887 17:26:15 -- common/autobuild_common.sh@438 -- $ date +%s
00:00:53.887 17:26:15 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721834775.XXXXXX
00:00:53.887 17:26:15 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721834775.jfGA69
00:00:53.887 17:26:15 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
00:00:53.887 17:26:15 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']'
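The autobuild_common.sh@438 lines above create a per-run scratch directory whose name embeds the epoch timestamp, which is what produces paths like /tmp/spdk_1721834775.jfGA69. A minimal sketch of that idiom (the "out" path follows the trace; everything else is illustrative):

    # Per-run scratch area: epoch timestamp plus a random mktemp suffix.
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output   # artifact directory
    ts=$(date +%s)
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")   # e.g. /tmp/spdk_1721834775.jfGA69
    export SPDK_WORKSPACE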
00:00:53.887 17:26:15 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:53.887 17:26:15 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:53.887 17:26:15 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:53.887 17:26:15 -- common/autobuild_common.sh@454 -- $ get_config_params 00:00:53.887 17:26:15 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:00:53.887 17:26:15 -- common/autotest_common.sh@10 -- $ set +x 00:00:53.887 17:26:15 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:00:53.887 17:26:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:53.887 17:26:15 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:53.887 17:26:15 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:53.887 17:26:15 -- spdk/autobuild.sh@16 -- $ date -u 00:00:53.887 Wed Jul 24 03:26:15 PM UTC 2024 00:00:53.887 17:26:15 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:54.146 LTS-60-gdbef7efac 00:00:54.146 17:26:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:54.146 17:26:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:54.146 17:26:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:54.146 17:26:15 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:00:54.146 17:26:15 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:00:54.146 17:26:15 -- common/autotest_common.sh@10 -- $ set +x 00:00:54.146 ************************************ 00:00:54.146 START TEST ubsan 00:00:54.146 ************************************ 00:00:54.146 17:26:15 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:00:54.146 using ubsan 00:00:54.146 00:00:54.146 real 0m0.000s 00:00:54.146 user 0m0.000s 00:00:54.146 sys 0m0.000s 00:00:54.146 17:26:15 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:54.146 17:26:15 -- common/autotest_common.sh@10 -- $ set +x 00:00:54.146 ************************************ 00:00:54.146 END TEST ubsan 00:00:54.146 ************************************ 00:00:54.146 17:26:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:54.146 17:26:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:54.146 17:26:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:54.146 17:26:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:54.146 17:26:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:54.146 17:26:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:54.146 17:26:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:54.146 17:26:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:54.146 17:26:15 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:54.146 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:54.146 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:54.404 Using 'verbs' RDMA provider 00:01:07.178 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:19.471 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:19.471 Creating mk/config.mk...done. 00:01:19.471 Creating mk/cc.flags.mk...done. 00:01:19.471 Type 'make' to build. 00:01:19.471 17:26:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:19.471 17:26:39 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:19.471 17:26:39 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:19.471 17:26:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.471 ************************************ 00:01:19.471 START TEST make 00:01:19.471 ************************************ 00:01:19.471 17:26:39 -- common/autotest_common.sh@1104 -- $ make -j96 00:01:19.471 make[1]: Nothing to be done for 'all'. 00:01:26.056 The Meson build system 00:01:26.056 Version: 1.3.1 00:01:26.056 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:26.056 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:26.056 Build type: native build 00:01:26.056 Program cat found: YES (/usr/bin/cat) 00:01:26.056 Project name: DPDK 00:01:26.056 Project version: 23.11.0 00:01:26.056 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:26.056 C linker for the host machine: cc ld.bfd 2.39-16 00:01:26.056 Host machine cpu family: x86_64 00:01:26.056 Host machine cpu: x86_64 00:01:26.056 Message: ## Building in Developer Mode ## 00:01:26.056 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:26.056 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:26.056 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:26.056 Program python3 found: YES (/usr/bin/python3) 00:01:26.056 Program cat found: YES (/usr/bin/cat) 00:01:26.056 Compiler for C supports arguments -march=native: YES 00:01:26.056 Checking for size of "void *" : 8 00:01:26.056 Checking for size of "void *" : 8 (cached) 00:01:26.056 Library m found: YES 00:01:26.056 Library numa found: YES 00:01:26.056 Has header "numaif.h" : YES 00:01:26.056 Library fdt found: NO 00:01:26.056 Library execinfo found: NO 00:01:26.056 Has header "execinfo.h" : YES 00:01:26.056 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:26.056 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:26.056 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:26.056 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:26.056 Run-time dependency openssl found: YES 3.0.9 00:01:26.056 Run-time dependency libpcap found: YES 1.10.4 00:01:26.056 Has header "pcap.h" with dependency libpcap: YES 00:01:26.056 Compiler for C supports arguments -Wcast-qual: YES 00:01:26.056 Compiler for C supports arguments -Wdeprecated: YES 00:01:26.056 Compiler for C supports arguments -Wformat: YES 00:01:26.056 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:26.056 Compiler for C supports arguments -Wformat-security: NO 00:01:26.056 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:26.056 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:26.056 Compiler for C 
supports arguments -Wnested-externs: YES 00:01:26.056 Compiler for C supports arguments -Wold-style-definition: YES 00:01:26.056 Compiler for C supports arguments -Wpointer-arith: YES 00:01:26.057 Compiler for C supports arguments -Wsign-compare: YES 00:01:26.057 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:26.057 Compiler for C supports arguments -Wundef: YES 00:01:26.057 Compiler for C supports arguments -Wwrite-strings: YES 00:01:26.057 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:26.057 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:26.057 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:26.057 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:26.057 Program objdump found: YES (/usr/bin/objdump) 00:01:26.057 Compiler for C supports arguments -mavx512f: YES 00:01:26.057 Checking if "AVX512 checking" compiles: YES 00:01:26.057 Fetching value of define "__SSE4_2__" : 1 00:01:26.057 Fetching value of define "__AES__" : 1 00:01:26.057 Fetching value of define "__AVX__" : 1 00:01:26.057 Fetching value of define "__AVX2__" : 1 00:01:26.057 Fetching value of define "__AVX512BW__" : 1 00:01:26.057 Fetching value of define "__AVX512CD__" : 1 00:01:26.057 Fetching value of define "__AVX512DQ__" : 1 00:01:26.057 Fetching value of define "__AVX512F__" : 1 00:01:26.057 Fetching value of define "__AVX512VL__" : 1 00:01:26.057 Fetching value of define "__PCLMUL__" : 1 00:01:26.057 Fetching value of define "__RDRND__" : 1 00:01:26.057 Fetching value of define "__RDSEED__" : 1 00:01:26.057 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:26.057 Fetching value of define "__znver1__" : (undefined) 00:01:26.057 Fetching value of define "__znver2__" : (undefined) 00:01:26.057 Fetching value of define "__znver3__" : (undefined) 00:01:26.057 Fetching value of define "__znver4__" : (undefined) 00:01:26.057 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:26.057 Message: lib/log: Defining dependency "log" 00:01:26.057 Message: lib/kvargs: Defining dependency "kvargs" 00:01:26.057 Message: lib/telemetry: Defining dependency "telemetry" 00:01:26.057 Checking for function "getentropy" : NO 00:01:26.057 Message: lib/eal: Defining dependency "eal" 00:01:26.057 Message: lib/ring: Defining dependency "ring" 00:01:26.057 Message: lib/rcu: Defining dependency "rcu" 00:01:26.057 Message: lib/mempool: Defining dependency "mempool" 00:01:26.057 Message: lib/mbuf: Defining dependency "mbuf" 00:01:26.057 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:26.057 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:26.057 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:26.057 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:26.057 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:26.057 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:26.057 Compiler for C supports arguments -mpclmul: YES 00:01:26.057 Compiler for C supports arguments -maes: YES 00:01:26.057 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:26.057 Compiler for C supports arguments -mavx512bw: YES 00:01:26.057 Compiler for C supports arguments -mavx512dq: YES 00:01:26.057 Compiler for C supports arguments -mavx512vl: YES 00:01:26.057 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:26.057 Compiler for C supports arguments -mavx2: YES 00:01:26.057 Compiler for C supports arguments -mavx: YES 00:01:26.057 Message: lib/net: 
Defining dependency "net" 00:01:26.057 Message: lib/meter: Defining dependency "meter" 00:01:26.057 Message: lib/ethdev: Defining dependency "ethdev" 00:01:26.057 Message: lib/pci: Defining dependency "pci" 00:01:26.057 Message: lib/cmdline: Defining dependency "cmdline" 00:01:26.057 Message: lib/hash: Defining dependency "hash" 00:01:26.057 Message: lib/timer: Defining dependency "timer" 00:01:26.057 Message: lib/compressdev: Defining dependency "compressdev" 00:01:26.057 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:26.057 Message: lib/dmadev: Defining dependency "dmadev" 00:01:26.057 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:26.057 Message: lib/power: Defining dependency "power" 00:01:26.057 Message: lib/reorder: Defining dependency "reorder" 00:01:26.057 Message: lib/security: Defining dependency "security" 00:01:26.057 Has header "linux/userfaultfd.h" : YES 00:01:26.057 Has header "linux/vduse.h" : YES 00:01:26.057 Message: lib/vhost: Defining dependency "vhost" 00:01:26.057 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:26.057 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:26.057 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:26.057 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:26.057 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:26.057 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:26.057 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:26.057 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:26.057 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:26.057 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:26.057 Program doxygen found: YES (/usr/bin/doxygen) 00:01:26.057 Configuring doxy-api-html.conf using configuration 00:01:26.057 Configuring doxy-api-man.conf using configuration 00:01:26.057 Program mandb found: YES (/usr/bin/mandb) 00:01:26.057 Program sphinx-build found: NO 00:01:26.057 Configuring rte_build_config.h using configuration 00:01:26.057 Message: 00:01:26.057 ================= 00:01:26.057 Applications Enabled 00:01:26.057 ================= 00:01:26.057 00:01:26.057 apps: 00:01:26.057 00:01:26.057 00:01:26.057 Message: 00:01:26.057 ================= 00:01:26.057 Libraries Enabled 00:01:26.057 ================= 00:01:26.057 00:01:26.057 libs: 00:01:26.057 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:26.057 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:26.057 cryptodev, dmadev, power, reorder, security, vhost, 00:01:26.057 00:01:26.057 Message: 00:01:26.057 =============== 00:01:26.057 Drivers Enabled 00:01:26.057 =============== 00:01:26.057 00:01:26.057 common: 00:01:26.057 00:01:26.057 bus: 00:01:26.057 pci, vdev, 00:01:26.057 mempool: 00:01:26.057 ring, 00:01:26.057 dma: 00:01:26.057 00:01:26.057 net: 00:01:26.057 00:01:26.057 crypto: 00:01:26.057 00:01:26.057 compress: 00:01:26.057 00:01:26.057 vdpa: 00:01:26.057 00:01:26.057 00:01:26.057 Message: 00:01:26.057 ================= 00:01:26.057 Content Skipped 00:01:26.057 ================= 00:01:26.057 00:01:26.057 apps: 00:01:26.057 dumpcap: explicitly disabled via build config 00:01:26.057 graph: explicitly disabled via build config 00:01:26.057 pdump: explicitly disabled via build config 00:01:26.057 proc-info: explicitly disabled via build config 00:01:26.057 test-acl: explicitly 
disabled via build config 00:01:26.057 test-bbdev: explicitly disabled via build config 00:01:26.057 test-cmdline: explicitly disabled via build config 00:01:26.057 test-compress-perf: explicitly disabled via build config 00:01:26.057 test-crypto-perf: explicitly disabled via build config 00:01:26.057 test-dma-perf: explicitly disabled via build config 00:01:26.057 test-eventdev: explicitly disabled via build config 00:01:26.057 test-fib: explicitly disabled via build config 00:01:26.057 test-flow-perf: explicitly disabled via build config 00:01:26.057 test-gpudev: explicitly disabled via build config 00:01:26.057 test-mldev: explicitly disabled via build config 00:01:26.057 test-pipeline: explicitly disabled via build config 00:01:26.057 test-pmd: explicitly disabled via build config 00:01:26.057 test-regex: explicitly disabled via build config 00:01:26.057 test-sad: explicitly disabled via build config 00:01:26.057 test-security-perf: explicitly disabled via build config 00:01:26.057 00:01:26.057 libs: 00:01:26.057 metrics: explicitly disabled via build config 00:01:26.057 acl: explicitly disabled via build config 00:01:26.057 bbdev: explicitly disabled via build config 00:01:26.057 bitratestats: explicitly disabled via build config 00:01:26.057 bpf: explicitly disabled via build config 00:01:26.057 cfgfile: explicitly disabled via build config 00:01:26.057 distributor: explicitly disabled via build config 00:01:26.057 efd: explicitly disabled via build config 00:01:26.057 eventdev: explicitly disabled via build config 00:01:26.057 dispatcher: explicitly disabled via build config 00:01:26.057 gpudev: explicitly disabled via build config 00:01:26.057 gro: explicitly disabled via build config 00:01:26.057 gso: explicitly disabled via build config 00:01:26.057 ip_frag: explicitly disabled via build config 00:01:26.057 jobstats: explicitly disabled via build config 00:01:26.057 latencystats: explicitly disabled via build config 00:01:26.057 lpm: explicitly disabled via build config 00:01:26.057 member: explicitly disabled via build config 00:01:26.057 pcapng: explicitly disabled via build config 00:01:26.057 rawdev: explicitly disabled via build config 00:01:26.057 regexdev: explicitly disabled via build config 00:01:26.057 mldev: explicitly disabled via build config 00:01:26.057 rib: explicitly disabled via build config 00:01:26.057 sched: explicitly disabled via build config 00:01:26.057 stack: explicitly disabled via build config 00:01:26.057 ipsec: explicitly disabled via build config 00:01:26.057 pdcp: explicitly disabled via build config 00:01:26.057 fib: explicitly disabled via build config 00:01:26.057 port: explicitly disabled via build config 00:01:26.057 pdump: explicitly disabled via build config 00:01:26.057 table: explicitly disabled via build config 00:01:26.057 pipeline: explicitly disabled via build config 00:01:26.057 graph: explicitly disabled via build config 00:01:26.057 node: explicitly disabled via build config 00:01:26.057 00:01:26.057 drivers: 00:01:26.057 common/cpt: not in enabled drivers build config 00:01:26.057 common/dpaax: not in enabled drivers build config 00:01:26.057 common/iavf: not in enabled drivers build config 00:01:26.057 common/idpf: not in enabled drivers build config 00:01:26.058 common/mvep: not in enabled drivers build config 00:01:26.058 common/octeontx: not in enabled drivers build config 00:01:26.058 bus/auxiliary: not in enabled drivers build config 00:01:26.058 bus/cdx: not in enabled drivers build config 00:01:26.058 bus/dpaa: not in 
enabled drivers build config 00:01:26.058 bus/fslmc: not in enabled drivers build config 00:01:26.058 bus/ifpga: not in enabled drivers build config 00:01:26.058 bus/platform: not in enabled drivers build config 00:01:26.058 bus/vmbus: not in enabled drivers build config 00:01:26.058 common/cnxk: not in enabled drivers build config 00:01:26.058 common/mlx5: not in enabled drivers build config 00:01:26.058 common/nfp: not in enabled drivers build config 00:01:26.058 common/qat: not in enabled drivers build config 00:01:26.058 common/sfc_efx: not in enabled drivers build config 00:01:26.058 mempool/bucket: not in enabled drivers build config 00:01:26.058 mempool/cnxk: not in enabled drivers build config 00:01:26.058 mempool/dpaa: not in enabled drivers build config 00:01:26.058 mempool/dpaa2: not in enabled drivers build config 00:01:26.058 mempool/octeontx: not in enabled drivers build config 00:01:26.058 mempool/stack: not in enabled drivers build config 00:01:26.058 dma/cnxk: not in enabled drivers build config 00:01:26.058 dma/dpaa: not in enabled drivers build config 00:01:26.058 dma/dpaa2: not in enabled drivers build config 00:01:26.058 dma/hisilicon: not in enabled drivers build config 00:01:26.058 dma/idxd: not in enabled drivers build config 00:01:26.058 dma/ioat: not in enabled drivers build config 00:01:26.058 dma/skeleton: not in enabled drivers build config 00:01:26.058 net/af_packet: not in enabled drivers build config 00:01:26.058 net/af_xdp: not in enabled drivers build config 00:01:26.058 net/ark: not in enabled drivers build config 00:01:26.058 net/atlantic: not in enabled drivers build config 00:01:26.058 net/avp: not in enabled drivers build config 00:01:26.058 net/axgbe: not in enabled drivers build config 00:01:26.058 net/bnx2x: not in enabled drivers build config 00:01:26.058 net/bnxt: not in enabled drivers build config 00:01:26.058 net/bonding: not in enabled drivers build config 00:01:26.058 net/cnxk: not in enabled drivers build config 00:01:26.058 net/cpfl: not in enabled drivers build config 00:01:26.058 net/cxgbe: not in enabled drivers build config 00:01:26.058 net/dpaa: not in enabled drivers build config 00:01:26.058 net/dpaa2: not in enabled drivers build config 00:01:26.058 net/e1000: not in enabled drivers build config 00:01:26.058 net/ena: not in enabled drivers build config 00:01:26.058 net/enetc: not in enabled drivers build config 00:01:26.058 net/enetfec: not in enabled drivers build config 00:01:26.058 net/enic: not in enabled drivers build config 00:01:26.058 net/failsafe: not in enabled drivers build config 00:01:26.058 net/fm10k: not in enabled drivers build config 00:01:26.058 net/gve: not in enabled drivers build config 00:01:26.058 net/hinic: not in enabled drivers build config 00:01:26.058 net/hns3: not in enabled drivers build config 00:01:26.058 net/i40e: not in enabled drivers build config 00:01:26.058 net/iavf: not in enabled drivers build config 00:01:26.058 net/ice: not in enabled drivers build config 00:01:26.058 net/idpf: not in enabled drivers build config 00:01:26.058 net/igc: not in enabled drivers build config 00:01:26.058 net/ionic: not in enabled drivers build config 00:01:26.058 net/ipn3ke: not in enabled drivers build config 00:01:26.058 net/ixgbe: not in enabled drivers build config 00:01:26.058 net/mana: not in enabled drivers build config 00:01:26.058 net/memif: not in enabled drivers build config 00:01:26.058 net/mlx4: not in enabled drivers build config 00:01:26.058 net/mlx5: not in enabled drivers build config 
00:01:26.058 net/mvneta: not in enabled drivers build config 00:01:26.058 net/mvpp2: not in enabled drivers build config 00:01:26.058 net/netvsc: not in enabled drivers build config 00:01:26.058 net/nfb: not in enabled drivers build config 00:01:26.058 net/nfp: not in enabled drivers build config 00:01:26.058 net/ngbe: not in enabled drivers build config 00:01:26.058 net/null: not in enabled drivers build config 00:01:26.058 net/octeontx: not in enabled drivers build config 00:01:26.058 net/octeon_ep: not in enabled drivers build config 00:01:26.058 net/pcap: not in enabled drivers build config 00:01:26.058 net/pfe: not in enabled drivers build config 00:01:26.058 net/qede: not in enabled drivers build config 00:01:26.058 net/ring: not in enabled drivers build config 00:01:26.058 net/sfc: not in enabled drivers build config 00:01:26.058 net/softnic: not in enabled drivers build config 00:01:26.058 net/tap: not in enabled drivers build config 00:01:26.058 net/thunderx: not in enabled drivers build config 00:01:26.058 net/txgbe: not in enabled drivers build config 00:01:26.058 net/vdev_netvsc: not in enabled drivers build config 00:01:26.058 net/vhost: not in enabled drivers build config 00:01:26.058 net/virtio: not in enabled drivers build config 00:01:26.058 net/vmxnet3: not in enabled drivers build config 00:01:26.058 raw/*: missing internal dependency, "rawdev" 00:01:26.058 crypto/armv8: not in enabled drivers build config 00:01:26.058 crypto/bcmfs: not in enabled drivers build config 00:01:26.058 crypto/caam_jr: not in enabled drivers build config 00:01:26.058 crypto/ccp: not in enabled drivers build config 00:01:26.058 crypto/cnxk: not in enabled drivers build config 00:01:26.058 crypto/dpaa_sec: not in enabled drivers build config 00:01:26.058 crypto/dpaa2_sec: not in enabled drivers build config 00:01:26.058 crypto/ipsec_mb: not in enabled drivers build config 00:01:26.058 crypto/mlx5: not in enabled drivers build config 00:01:26.058 crypto/mvsam: not in enabled drivers build config 00:01:26.058 crypto/nitrox: not in enabled drivers build config 00:01:26.058 crypto/null: not in enabled drivers build config 00:01:26.058 crypto/octeontx: not in enabled drivers build config 00:01:26.058 crypto/openssl: not in enabled drivers build config 00:01:26.058 crypto/scheduler: not in enabled drivers build config 00:01:26.058 crypto/uadk: not in enabled drivers build config 00:01:26.058 crypto/virtio: not in enabled drivers build config 00:01:26.058 compress/isal: not in enabled drivers build config 00:01:26.058 compress/mlx5: not in enabled drivers build config 00:01:26.058 compress/octeontx: not in enabled drivers build config 00:01:26.058 compress/zlib: not in enabled drivers build config 00:01:26.058 regex/*: missing internal dependency, "regexdev" 00:01:26.058 ml/*: missing internal dependency, "mldev" 00:01:26.058 vdpa/ifc: not in enabled drivers build config 00:01:26.058 vdpa/mlx5: not in enabled drivers build config 00:01:26.058 vdpa/nfp: not in enabled drivers build config 00:01:26.058 vdpa/sfc: not in enabled drivers build config 00:01:26.058 event/*: missing internal dependency, "eventdev" 00:01:26.058 baseband/*: missing internal dependency, "bbdev" 00:01:26.058 gpu/*: missing internal dependency, "gpudev" 00:01:26.058 00:01:26.058 00:01:26.058 Build targets in project: 85 00:01:26.058 00:01:26.058 DPDK 23.11.0 00:01:26.058 00:01:26.058 User defined options 00:01:26.058 buildtype : debug 00:01:26.058 default_library : shared 00:01:26.058 libdir : lib 00:01:26.058 prefix : 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:26.058 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:26.058 c_link_args : 00:01:26.058 cpu_instruction_set: native 00:01:26.058 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:26.058 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:01:26.058 enable_docs : false 00:01:26.058 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:26.058 enable_kmods : false 00:01:26.058 tests : false 00:01:26.058 00:01:26.058 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:26.058 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:26.058 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:26.058 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:26.058 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:26.058 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:26.058 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:26.058 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:26.058 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:26.058 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:26.058 [9/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:26.058 [10/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:26.058 [11/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:26.058 [12/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:26.058 [13/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:26.058 [14/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:26.058 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:26.059 [16/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:26.318 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:26.318 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:26.318 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:26.318 [20/265] Linking static target lib/librte_kvargs.a 00:01:26.318 [21/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:26.318 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:26.318 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:26.318 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:26.318 [25/265] Linking static target lib/librte_log.a 00:01:26.318 [26/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:26.318 [27/265] Linking static target lib/librte_pci.a 00:01:26.318 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:26.318 [29/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:26.318 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:26.318 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:26.318 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:26.318 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:26.318 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:26.318 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:26.318 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:26.318 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:26.577 [38/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:26.577 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:26.577 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:26.577 [41/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:26.577 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:26.577 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:26.577 [44/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:26.577 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:26.577 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:26.577 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:26.577 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:26.577 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:26.577 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:26.577 [51/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:26.577 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:26.577 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:26.577 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:26.577 [55/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:26.577 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:26.577 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:26.577 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:26.577 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:26.577 [60/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:26.577 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:26.577 [62/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:26.577 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:26.577 [64/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:26.577 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:26.577 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:26.577 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:26.577 [68/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:26.577 [69/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:26.577 [70/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:26.577 [71/265] Linking static target lib/librte_telemetry.a 00:01:26.577 [72/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:26.577 [73/265] Linking static target lib/librte_ring.a 00:01:26.577 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:26.577 [75/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:26.577 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:26.577 [77/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:26.577 [78/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.577 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:26.577 [80/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:26.577 [81/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:26.577 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:26.577 [83/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:26.577 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:26.577 [85/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:26.577 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:26.577 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:26.577 [88/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:26.577 [89/265] Linking static target lib/librte_meter.a 00:01:26.577 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:26.577 [91/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:26.577 [92/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.577 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:26.577 [94/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:26.577 [95/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:26.836 [96/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:26.836 [97/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:26.836 [98/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:26.836 [99/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:26.836 [100/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:26.836 [101/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:26.836 [102/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:26.836 [103/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:26.836 [104/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:26.836 [105/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:26.836 [106/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:26.836 [107/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:26.836 [108/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:26.836 [109/265] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:26.836 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:26.836 [111/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:26.836 [112/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:26.836 [113/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:26.836 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:26.836 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:26.836 [116/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:26.836 [117/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:26.836 [118/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:26.836 [119/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:26.836 [120/265] Linking static target lib/librte_rcu.a 00:01:26.836 [121/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:26.836 [122/265] Linking static target lib/librte_cmdline.a 00:01:26.836 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:26.836 [124/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:26.836 [125/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:26.836 [126/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:26.836 [127/265] Linking static target lib/librte_mempool.a 00:01:26.836 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:26.836 [129/265] Linking static target lib/librte_eal.a 00:01:26.836 [130/265] Linking static target lib/librte_timer.a 00:01:26.836 [131/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:26.836 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:26.836 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:26.836 [134/265] Linking static target lib/librte_net.a 00:01:26.836 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:26.836 [136/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:26.836 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:26.836 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:26.836 [139/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:26.836 [140/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:26.836 [141/265] Linking static target lib/librte_compressdev.a 00:01:26.836 [142/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:26.836 [143/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.836 [144/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.836 [145/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:26.836 [146/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:26.836 [147/265] Linking static target lib/librte_mbuf.a 00:01:26.836 [148/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.836 [149/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:27.095 [150/265] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:27.095 [151/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:27.095 [152/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:27.095 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:27.095 [154/265] Linking target lib/librte_log.so.24.0 00:01:27.095 [155/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:27.095 [156/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:27.095 [157/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.095 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:27.095 [159/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:27.095 [160/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:27.095 [161/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.095 [162/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.095 [163/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:27.095 [164/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:27.095 [165/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:27.095 [166/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:27.095 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:27.095 [168/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:27.095 [169/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:27.095 [170/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:27.095 [171/265] Linking static target lib/librte_hash.a 00:01:27.095 [172/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:27.095 [173/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:27.095 [174/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:27.095 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:27.095 [176/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:27.095 [177/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.095 [178/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:27.095 [179/265] Linking static target lib/librte_power.a 00:01:27.095 [180/265] Linking target lib/librte_telemetry.so.24.0 00:01:27.095 [181/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:27.095 [182/265] Linking target lib/librte_kvargs.so.24.0 00:01:27.095 [183/265] Linking static target lib/librte_dmadev.a 00:01:27.095 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:27.095 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:27.095 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:27.095 [187/265] Linking static target lib/librte_reorder.a 00:01:27.355 [188/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:27.355 [189/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:27.355 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:27.356 [191/265] 
Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:27.356 [192/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:27.356 [193/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:27.356 [194/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:27.356 [195/265] Linking static target lib/librte_security.a 00:01:27.356 [196/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:27.356 [197/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:27.356 [198/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.356 [199/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.356 [200/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.356 [201/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.356 [202/265] Linking static target drivers/librte_mempool_ring.a 00:01:27.356 [203/265] Linking static target drivers/librte_bus_vdev.a 00:01:27.356 [204/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:27.356 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:27.356 [206/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.615 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.615 [208/265] Linking static target drivers/librte_bus_pci.a 00:01:27.615 [209/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.615 [210/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.615 [211/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:27.615 [212/265] Linking static target lib/librte_cryptodev.a 00:01:27.615 [213/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.615 [214/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.615 [215/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.615 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.875 [217/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.875 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:27.875 [219/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.875 [220/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.875 [221/265] Linking static target lib/librte_ethdev.a 00:01:27.875 [222/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:27.875 [223/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.135 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.070 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:29.070 [226/265] Linking static target lib/librte_vhost.a 00:01:29.329 [227/265] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:30.899 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.174 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.174 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.174 [231/265] Linking target lib/librte_eal.so.24.0 00:01:36.174 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:36.174 [233/265] Linking target lib/librte_ring.so.24.0 00:01:36.174 [234/265] Linking target lib/librte_meter.so.24.0 00:01:36.174 [235/265] Linking target lib/librte_dmadev.so.24.0 00:01:36.174 [236/265] Linking target lib/librte_timer.so.24.0 00:01:36.174 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:36.174 [238/265] Linking target lib/librte_pci.so.24.0 00:01:36.174 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:36.174 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:36.174 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:36.174 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:36.433 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:36.433 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:36.433 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:36.433 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:36.433 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:36.433 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:36.433 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:36.433 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:36.693 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:36.693 [252/265] Linking target lib/librte_reorder.so.24.0 00:01:36.693 [253/265] Linking target lib/librte_compressdev.so.24.0 00:01:36.693 [254/265] Linking target lib/librte_net.so.24.0 00:01:36.693 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:36.693 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:36.693 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:36.693 [258/265] Linking target lib/librte_cmdline.so.24.0 00:01:36.693 [259/265] Linking target lib/librte_security.so.24.0 00:01:36.693 [260/265] Linking target lib/librte_hash.so.24.0 00:01:36.693 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:36.952 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:36.952 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:36.952 [264/265] Linking target lib/librte_power.so.24.0 00:01:36.952 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:36.952 INFO: autodetecting backend as ninja 00:01:36.952 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:37.890 CC lib/ut/ut.o 00:01:37.890 CC lib/log/log.o 00:01:37.890 CC lib/log/log_flags.o 00:01:37.890 CC lib/ut_mock/mock.o 00:01:37.890 CC lib/log/log_deprecated.o 00:01:37.890 LIB libspdk_ut.a 00:01:37.890 LIB libspdk_ut_mock.a 
00:01:37.890 SO libspdk_ut.so.1.0 00:01:37.890 LIB libspdk_log.a 00:01:37.890 SO libspdk_ut_mock.so.5.0 00:01:38.149 SYMLINK libspdk_ut.so 00:01:38.149 SO libspdk_log.so.6.1 00:01:38.149 SYMLINK libspdk_ut_mock.so 00:01:38.149 SYMLINK libspdk_log.so 00:01:38.149 CXX lib/trace_parser/trace.o 00:01:38.409 CC lib/dma/dma.o 00:01:38.409 CC lib/util/base64.o 00:01:38.409 CC lib/util/cpuset.o 00:01:38.409 CC lib/util/bit_array.o 00:01:38.409 CC lib/util/crc32.o 00:01:38.409 CC lib/util/crc16.o 00:01:38.409 CC lib/util/crc64.o 00:01:38.409 CC lib/util/crc32c.o 00:01:38.409 CC lib/util/crc32_ieee.o 00:01:38.409 CC lib/util/dif.o 00:01:38.409 CC lib/util/fd.o 00:01:38.409 CC lib/util/file.o 00:01:38.409 CC lib/ioat/ioat.o 00:01:38.409 CC lib/util/hexlify.o 00:01:38.409 CC lib/util/iov.o 00:01:38.409 CC lib/util/math.o 00:01:38.409 CC lib/util/pipe.o 00:01:38.409 CC lib/util/strerror_tls.o 00:01:38.409 CC lib/util/string.o 00:01:38.409 CC lib/util/uuid.o 00:01:38.409 CC lib/util/fd_group.o 00:01:38.409 CC lib/util/xor.o 00:01:38.409 CC lib/util/zipf.o 00:01:38.409 CC lib/vfio_user/host/vfio_user_pci.o 00:01:38.409 CC lib/vfio_user/host/vfio_user.o 00:01:38.409 LIB libspdk_dma.a 00:01:38.409 SO libspdk_dma.so.3.0 00:01:38.668 LIB libspdk_ioat.a 00:01:38.668 SYMLINK libspdk_dma.so 00:01:38.668 SO libspdk_ioat.so.6.0 00:01:38.668 LIB libspdk_vfio_user.a 00:01:38.668 SO libspdk_vfio_user.so.4.0 00:01:38.668 SYMLINK libspdk_ioat.so 00:01:38.668 SYMLINK libspdk_vfio_user.so 00:01:38.668 LIB libspdk_util.a 00:01:38.668 SO libspdk_util.so.8.0 00:01:38.928 SYMLINK libspdk_util.so 00:01:38.928 LIB libspdk_trace_parser.a 00:01:38.928 SO libspdk_trace_parser.so.4.0 00:01:38.928 SYMLINK libspdk_trace_parser.so 00:01:38.928 CC lib/env_dpdk/env.o 00:01:38.928 CC lib/env_dpdk/memory.o 00:01:38.928 CC lib/env_dpdk/pci.o 00:01:38.928 CC lib/env_dpdk/init.o 00:01:38.928 CC lib/env_dpdk/threads.o 00:01:38.928 CC lib/env_dpdk/pci_ioat.o 00:01:38.928 CC lib/env_dpdk/pci_virtio.o 00:01:38.928 CC lib/env_dpdk/pci_idxd.o 00:01:38.928 CC lib/env_dpdk/pci_vmd.o 00:01:38.928 CC lib/env_dpdk/pci_event.o 00:01:38.928 CC lib/vmd/vmd.o 00:01:38.928 CC lib/vmd/led.o 00:01:38.928 CC lib/env_dpdk/sigbus_handler.o 00:01:38.928 CC lib/env_dpdk/pci_dpdk.o 00:01:38.928 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:38.928 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:39.187 CC lib/rdma/common.o 00:01:39.187 CC lib/rdma/rdma_verbs.o 00:01:39.187 CC lib/idxd/idxd.o 00:01:39.187 CC lib/idxd/idxd_user.o 00:01:39.187 CC lib/idxd/idxd_kernel.o 00:01:39.187 CC lib/json/json_parse.o 00:01:39.187 CC lib/json/json_util.o 00:01:39.187 CC lib/json/json_write.o 00:01:39.187 CC lib/conf/conf.o 00:01:39.187 LIB libspdk_conf.a 00:01:39.187 LIB libspdk_rdma.a 00:01:39.187 SO libspdk_conf.so.5.0 00:01:39.445 LIB libspdk_json.a 00:01:39.445 SO libspdk_rdma.so.5.0 00:01:39.445 SYMLINK libspdk_conf.so 00:01:39.445 SO libspdk_json.so.5.1 00:01:39.445 SYMLINK libspdk_rdma.so 00:01:39.445 SYMLINK libspdk_json.so 00:01:39.445 LIB libspdk_idxd.a 00:01:39.445 SO libspdk_idxd.so.11.0 00:01:39.445 LIB libspdk_vmd.a 00:01:39.704 SO libspdk_vmd.so.5.0 00:01:39.704 SYMLINK libspdk_idxd.so 00:01:39.704 CC lib/jsonrpc/jsonrpc_server.o 00:01:39.704 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:39.704 CC lib/jsonrpc/jsonrpc_client.o 00:01:39.704 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:39.704 SYMLINK libspdk_vmd.so 00:01:39.704 LIB libspdk_jsonrpc.a 00:01:39.963 SO libspdk_jsonrpc.so.5.1 00:01:39.963 SYMLINK libspdk_jsonrpc.so 00:01:39.963 LIB libspdk_env_dpdk.a 00:01:40.222 SO 
libspdk_env_dpdk.so.13.0 00:01:40.222 CC lib/rpc/rpc.o 00:01:40.222 SYMLINK libspdk_env_dpdk.so 00:01:40.222 LIB libspdk_rpc.a 00:01:40.222 SO libspdk_rpc.so.5.0 00:01:40.481 SYMLINK libspdk_rpc.so 00:01:40.481 CC lib/notify/notify.o 00:01:40.481 CC lib/notify/notify_rpc.o 00:01:40.481 CC lib/trace/trace_rpc.o 00:01:40.481 CC lib/trace/trace.o 00:01:40.481 CC lib/sock/sock.o 00:01:40.481 CC lib/trace/trace_flags.o 00:01:40.481 CC lib/sock/sock_rpc.o 00:01:40.740 LIB libspdk_notify.a 00:01:40.740 SO libspdk_notify.so.5.0 00:01:40.740 LIB libspdk_trace.a 00:01:40.740 SO libspdk_trace.so.9.0 00:01:40.740 SYMLINK libspdk_notify.so 00:01:41.000 LIB libspdk_sock.a 00:01:41.000 SYMLINK libspdk_trace.so 00:01:41.000 SO libspdk_sock.so.8.0 00:01:41.000 SYMLINK libspdk_sock.so 00:01:41.000 CC lib/thread/thread.o 00:01:41.000 CC lib/thread/iobuf.o 00:01:41.259 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:41.259 CC lib/nvme/nvme_ctrlr.o 00:01:41.259 CC lib/nvme/nvme_fabric.o 00:01:41.259 CC lib/nvme/nvme_ns_cmd.o 00:01:41.259 CC lib/nvme/nvme_ns.o 00:01:41.259 CC lib/nvme/nvme_pcie_common.o 00:01:41.259 CC lib/nvme/nvme_pcie.o 00:01:41.259 CC lib/nvme/nvme_qpair.o 00:01:41.259 CC lib/nvme/nvme.o 00:01:41.259 CC lib/nvme/nvme_quirks.o 00:01:41.259 CC lib/nvme/nvme_transport.o 00:01:41.259 CC lib/nvme/nvme_discovery.o 00:01:41.259 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:41.259 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:41.259 CC lib/nvme/nvme_tcp.o 00:01:41.259 CC lib/nvme/nvme_io_msg.o 00:01:41.259 CC lib/nvme/nvme_opal.o 00:01:41.259 CC lib/nvme/nvme_poll_group.o 00:01:41.259 CC lib/nvme/nvme_zns.o 00:01:41.259 CC lib/nvme/nvme_cuse.o 00:01:41.259 CC lib/nvme/nvme_vfio_user.o 00:01:41.259 CC lib/nvme/nvme_rdma.o 00:01:42.197 LIB libspdk_thread.a 00:01:42.197 SO libspdk_thread.so.9.0 00:01:42.197 SYMLINK libspdk_thread.so 00:01:42.456 CC lib/blob/blobstore.o 00:01:42.456 CC lib/virtio/virtio.o 00:01:42.456 CC lib/virtio/virtio_vfio_user.o 00:01:42.456 CC lib/virtio/virtio_vhost_user.o 00:01:42.456 CC lib/blob/request.o 00:01:42.456 CC lib/virtio/virtio_pci.o 00:01:42.456 CC lib/blob/zeroes.o 00:01:42.456 CC lib/blob/blob_bs_dev.o 00:01:42.456 CC lib/accel/accel.o 00:01:42.456 CC lib/accel/accel_rpc.o 00:01:42.456 CC lib/accel/accel_sw.o 00:01:42.456 CC lib/init/json_config.o 00:01:42.456 CC lib/init/subsystem_rpc.o 00:01:42.456 CC lib/init/subsystem.o 00:01:42.456 CC lib/init/rpc.o 00:01:42.715 LIB libspdk_init.a 00:01:42.715 SO libspdk_init.so.4.0 00:01:42.715 LIB libspdk_nvme.a 00:01:42.715 LIB libspdk_virtio.a 00:01:42.715 SO libspdk_virtio.so.6.0 00:01:42.715 SYMLINK libspdk_init.so 00:01:42.715 SO libspdk_nvme.so.12.0 00:01:42.715 SYMLINK libspdk_virtio.so 00:01:42.976 CC lib/event/app.o 00:01:42.976 CC lib/event/reactor.o 00:01:42.976 CC lib/event/app_rpc.o 00:01:42.976 CC lib/event/log_rpc.o 00:01:42.976 CC lib/event/scheduler_static.o 00:01:42.976 SYMLINK libspdk_nvme.so 00:01:43.286 LIB libspdk_accel.a 00:01:43.286 SO libspdk_accel.so.14.0 00:01:43.286 LIB libspdk_event.a 00:01:43.286 SYMLINK libspdk_accel.so 00:01:43.286 SO libspdk_event.so.12.0 00:01:43.286 SYMLINK libspdk_event.so 00:01:43.549 CC lib/bdev/bdev.o 00:01:43.549 CC lib/bdev/bdev_rpc.o 00:01:43.549 CC lib/bdev/bdev_zone.o 00:01:43.549 CC lib/bdev/part.o 00:01:43.549 CC lib/bdev/scsi_nvme.o 00:01:44.486 LIB libspdk_blob.a 00:01:44.486 SO libspdk_blob.so.10.1 00:01:44.486 SYMLINK libspdk_blob.so 00:01:44.745 CC lib/lvol/lvol.o 00:01:44.745 CC lib/blobfs/blobfs.o 00:01:44.745 CC lib/blobfs/tree.o 00:01:45.312 LIB libspdk_bdev.a 00:01:45.312 LIB 
libspdk_blobfs.a 00:01:45.312 SO libspdk_bdev.so.14.0 00:01:45.312 SO libspdk_blobfs.so.9.0 00:01:45.312 LIB libspdk_lvol.a 00:01:45.312 SYMLINK libspdk_bdev.so 00:01:45.312 SO libspdk_lvol.so.9.1 00:01:45.312 SYMLINK libspdk_blobfs.so 00:01:45.312 SYMLINK libspdk_lvol.so 00:01:45.574 CC lib/nbd/nbd.o 00:01:45.574 CC lib/nvmf/ctrlr.o 00:01:45.574 CC lib/nbd/nbd_rpc.o 00:01:45.574 CC lib/nvmf/ctrlr_discovery.o 00:01:45.574 CC lib/nvmf/ctrlr_bdev.o 00:01:45.574 CC lib/nvmf/nvmf.o 00:01:45.574 CC lib/nvmf/transport.o 00:01:45.574 CC lib/nvmf/subsystem.o 00:01:45.574 CC lib/nvmf/nvmf_rpc.o 00:01:45.574 CC lib/nvmf/tcp.o 00:01:45.574 CC lib/nvmf/rdma.o 00:01:45.574 CC lib/ftl/ftl_init.o 00:01:45.574 CC lib/scsi/dev.o 00:01:45.574 CC lib/scsi/lun.o 00:01:45.574 CC lib/ftl/ftl_core.o 00:01:45.574 CC lib/scsi/port.o 00:01:45.574 CC lib/ublk/ublk.o 00:01:45.574 CC lib/ftl/ftl_layout.o 00:01:45.574 CC lib/scsi/scsi.o 00:01:45.574 CC lib/ftl/ftl_debug.o 00:01:45.574 CC lib/scsi/scsi_bdev.o 00:01:45.574 CC lib/ftl/ftl_io.o 00:01:45.574 CC lib/ublk/ublk_rpc.o 00:01:45.574 CC lib/scsi/scsi_pr.o 00:01:45.574 CC lib/ftl/ftl_sb.o 00:01:45.574 CC lib/ftl/ftl_l2p.o 00:01:45.574 CC lib/scsi/scsi_rpc.o 00:01:45.574 CC lib/ftl/ftl_l2p_flat.o 00:01:45.574 CC lib/scsi/task.o 00:01:45.574 CC lib/ftl/ftl_nv_cache.o 00:01:45.574 CC lib/ftl/ftl_band.o 00:01:45.574 CC lib/ftl/ftl_band_ops.o 00:01:45.574 CC lib/ftl/ftl_writer.o 00:01:45.574 CC lib/ftl/ftl_rq.o 00:01:45.574 CC lib/ftl/ftl_reloc.o 00:01:45.574 CC lib/ftl/ftl_l2p_cache.o 00:01:45.574 CC lib/ftl/ftl_p2l.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:45.574 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:45.574 CC lib/ftl/utils/ftl_conf.o 00:01:45.574 CC lib/ftl/utils/ftl_md.o 00:01:45.574 CC lib/ftl/utils/ftl_mempool.o 00:01:45.574 CC lib/ftl/utils/ftl_bitmap.o 00:01:45.574 CC lib/ftl/utils/ftl_property.o 00:01:45.574 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:45.574 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:45.574 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:45.574 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:45.574 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:45.574 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:45.574 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:45.574 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:45.574 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:45.574 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:45.574 CC lib/ftl/base/ftl_base_dev.o 00:01:45.574 CC lib/ftl/ftl_trace.o 00:01:45.574 CC lib/ftl/base/ftl_base_bdev.o 00:01:46.143 LIB libspdk_nbd.a 00:01:46.143 LIB libspdk_scsi.a 00:01:46.143 SO libspdk_nbd.so.6.0 00:01:46.143 SO libspdk_scsi.so.8.0 00:01:46.143 SYMLINK libspdk_nbd.so 00:01:46.143 LIB libspdk_ublk.a 00:01:46.143 SYMLINK libspdk_scsi.so 00:01:46.143 SO libspdk_ublk.so.2.0 00:01:46.143 SYMLINK libspdk_ublk.so 00:01:46.403 CC lib/vhost/vhost_rpc.o 00:01:46.403 CC lib/vhost/vhost.o 00:01:46.403 CC lib/vhost/vhost_blk.o 00:01:46.403 CC lib/iscsi/conn.o 00:01:46.403 CC lib/vhost/vhost_scsi.o 00:01:46.403 CC lib/iscsi/init_grp.o 00:01:46.403 CC 
lib/vhost/rte_vhost_user.o 00:01:46.403 LIB libspdk_ftl.a 00:01:46.403 CC lib/iscsi/iscsi.o 00:01:46.403 CC lib/iscsi/md5.o 00:01:46.403 CC lib/iscsi/param.o 00:01:46.403 CC lib/iscsi/portal_grp.o 00:01:46.403 CC lib/iscsi/tgt_node.o 00:01:46.403 CC lib/iscsi/iscsi_subsystem.o 00:01:46.403 CC lib/iscsi/iscsi_rpc.o 00:01:46.403 CC lib/iscsi/task.o 00:01:46.403 SO libspdk_ftl.so.8.0 00:01:46.662 SYMLINK libspdk_ftl.so 00:01:47.232 LIB libspdk_vhost.a 00:01:47.232 LIB libspdk_nvmf.a 00:01:47.232 SO libspdk_vhost.so.7.1 00:01:47.232 SO libspdk_nvmf.so.17.0 00:01:47.232 SYMLINK libspdk_vhost.so 00:01:47.232 LIB libspdk_iscsi.a 00:01:47.232 SYMLINK libspdk_nvmf.so 00:01:47.232 SO libspdk_iscsi.so.7.0 00:01:47.491 SYMLINK libspdk_iscsi.so 00:01:47.750 CC module/env_dpdk/env_dpdk_rpc.o 00:01:47.750 CC module/blob/bdev/blob_bdev.o 00:01:47.750 CC module/accel/ioat/accel_ioat.o 00:01:47.750 CC module/accel/ioat/accel_ioat_rpc.o 00:01:47.750 CC module/accel/dsa/accel_dsa_rpc.o 00:01:47.750 CC module/accel/dsa/accel_dsa.o 00:01:47.750 CC module/accel/iaa/accel_iaa_rpc.o 00:01:47.750 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:47.750 CC module/accel/iaa/accel_iaa.o 00:01:47.750 CC module/sock/posix/posix.o 00:01:47.750 CC module/accel/error/accel_error_rpc.o 00:01:47.750 CC module/accel/error/accel_error.o 00:01:47.750 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:47.750 CC module/scheduler/gscheduler/gscheduler.o 00:01:47.750 LIB libspdk_env_dpdk_rpc.a 00:01:48.009 SO libspdk_env_dpdk_rpc.so.5.0 00:01:48.009 SYMLINK libspdk_env_dpdk_rpc.so 00:01:48.009 LIB libspdk_accel_ioat.a 00:01:48.009 LIB libspdk_scheduler_gscheduler.a 00:01:48.009 LIB libspdk_scheduler_dpdk_governor.a 00:01:48.009 SO libspdk_accel_ioat.so.5.0 00:01:48.009 LIB libspdk_accel_iaa.a 00:01:48.009 LIB libspdk_blob_bdev.a 00:01:48.009 SO libspdk_scheduler_gscheduler.so.3.0 00:01:48.009 LIB libspdk_accel_error.a 00:01:48.009 LIB libspdk_scheduler_dynamic.a 00:01:48.009 LIB libspdk_accel_dsa.a 00:01:48.009 SO libspdk_scheduler_dpdk_governor.so.3.0 00:01:48.009 SO libspdk_accel_iaa.so.2.0 00:01:48.009 SYMLINK libspdk_accel_ioat.so 00:01:48.009 SO libspdk_blob_bdev.so.10.1 00:01:48.009 SO libspdk_scheduler_dynamic.so.3.0 00:01:48.009 SO libspdk_accel_error.so.1.0 00:01:48.009 SYMLINK libspdk_scheduler_gscheduler.so 00:01:48.009 SO libspdk_accel_dsa.so.4.0 00:01:48.009 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:48.009 SYMLINK libspdk_accel_iaa.so 00:01:48.009 SYMLINK libspdk_scheduler_dynamic.so 00:01:48.009 SYMLINK libspdk_blob_bdev.so 00:01:48.009 SYMLINK libspdk_accel_error.so 00:01:48.009 SYMLINK libspdk_accel_dsa.so 00:01:48.268 LIB libspdk_sock_posix.a 00:01:48.268 CC module/bdev/gpt/vbdev_gpt.o 00:01:48.268 CC module/bdev/gpt/gpt.o 00:01:48.526 CC module/bdev/aio/bdev_aio_rpc.o 00:01:48.527 CC module/bdev/aio/bdev_aio.o 00:01:48.527 CC module/blobfs/bdev/blobfs_bdev.o 00:01:48.527 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:48.527 CC module/bdev/malloc/bdev_malloc.o 00:01:48.527 CC module/bdev/lvol/vbdev_lvol.o 00:01:48.527 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:48.527 CC module/bdev/raid/bdev_raid_rpc.o 00:01:48.527 CC module/bdev/raid/raid0.o 00:01:48.527 CC module/bdev/raid/bdev_raid.o 00:01:48.527 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:48.527 CC module/bdev/split/vbdev_split.o 00:01:48.527 CC module/bdev/split/vbdev_split_rpc.o 00:01:48.527 CC module/bdev/raid/bdev_raid_sb.o 00:01:48.527 CC module/bdev/raid/raid1.o 00:01:48.527 CC module/bdev/raid/concat.o 00:01:48.527 CC 
module/bdev/delay/vbdev_delay.o 00:01:48.527 CC module/bdev/passthru/vbdev_passthru.o 00:01:48.527 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:48.527 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:48.527 CC module/bdev/error/vbdev_error.o 00:01:48.527 CC module/bdev/null/bdev_null.o 00:01:48.527 CC module/bdev/error/vbdev_error_rpc.o 00:01:48.527 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:48.527 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:48.527 CC module/bdev/null/bdev_null_rpc.o 00:01:48.527 CC module/bdev/nvme/bdev_nvme.o 00:01:48.527 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:48.527 CC module/bdev/nvme/nvme_rpc.o 00:01:48.527 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:48.527 CC module/bdev/nvme/vbdev_opal.o 00:01:48.527 CC module/bdev/nvme/bdev_mdns_client.o 00:01:48.527 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:48.527 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:48.527 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:48.527 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:48.527 SO libspdk_sock_posix.so.5.0 00:01:48.527 CC module/bdev/iscsi/bdev_iscsi.o 00:01:48.527 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:48.527 CC module/bdev/ftl/bdev_ftl.o 00:01:48.527 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:48.527 SYMLINK libspdk_sock_posix.so 00:01:48.527 LIB libspdk_blobfs_bdev.a 00:01:48.527 SO libspdk_blobfs_bdev.so.5.0 00:01:48.527 LIB libspdk_bdev_split.a 00:01:48.786 SO libspdk_bdev_split.so.5.0 00:01:48.786 LIB libspdk_bdev_null.a 00:01:48.786 SYMLINK libspdk_blobfs_bdev.so 00:01:48.786 LIB libspdk_bdev_gpt.a 00:01:48.786 LIB libspdk_bdev_error.a 00:01:48.786 SYMLINK libspdk_bdev_split.so 00:01:48.786 LIB libspdk_bdev_passthru.a 00:01:48.786 SO libspdk_bdev_null.so.5.0 00:01:48.786 SO libspdk_bdev_error.so.5.0 00:01:48.786 SO libspdk_bdev_gpt.so.5.0 00:01:48.786 SO libspdk_bdev_passthru.so.5.0 00:01:48.786 LIB libspdk_bdev_aio.a 00:01:48.786 LIB libspdk_bdev_zone_block.a 00:01:48.786 LIB libspdk_bdev_delay.a 00:01:48.786 SYMLINK libspdk_bdev_null.so 00:01:48.786 LIB libspdk_bdev_ftl.a 00:01:48.786 SO libspdk_bdev_aio.so.5.0 00:01:48.786 LIB libspdk_bdev_malloc.a 00:01:48.786 SYMLINK libspdk_bdev_gpt.so 00:01:48.786 SO libspdk_bdev_delay.so.5.0 00:01:48.786 SO libspdk_bdev_zone_block.so.5.0 00:01:48.786 SO libspdk_bdev_ftl.so.5.0 00:01:48.786 SYMLINK libspdk_bdev_error.so 00:01:48.786 LIB libspdk_bdev_iscsi.a 00:01:48.786 SYMLINK libspdk_bdev_passthru.so 00:01:48.786 SO libspdk_bdev_malloc.so.5.0 00:01:48.786 SO libspdk_bdev_iscsi.so.5.0 00:01:48.786 SYMLINK libspdk_bdev_aio.so 00:01:48.786 SYMLINK libspdk_bdev_delay.so 00:01:48.786 SYMLINK libspdk_bdev_zone_block.so 00:01:48.786 SYMLINK libspdk_bdev_ftl.so 00:01:48.786 LIB libspdk_bdev_lvol.a 00:01:48.786 SYMLINK libspdk_bdev_malloc.so 00:01:48.786 SYMLINK libspdk_bdev_iscsi.so 00:01:48.786 LIB libspdk_bdev_virtio.a 00:01:48.786 SO libspdk_bdev_lvol.so.5.0 00:01:49.045 SO libspdk_bdev_virtio.so.5.0 00:01:49.045 SYMLINK libspdk_bdev_lvol.so 00:01:49.045 SYMLINK libspdk_bdev_virtio.so 00:01:49.045 LIB libspdk_bdev_raid.a 00:01:49.303 SO libspdk_bdev_raid.so.5.0 00:01:49.303 SYMLINK libspdk_bdev_raid.so 00:01:50.238 LIB libspdk_bdev_nvme.a 00:01:50.238 SO libspdk_bdev_nvme.so.6.0 00:01:50.238 SYMLINK libspdk_bdev_nvme.so 00:01:50.497 CC module/event/subsystems/sock/sock.o 00:01:50.497 CC module/event/subsystems/scheduler/scheduler.o 00:01:50.497 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:50.497 CC module/event/subsystems/iobuf/iobuf.o 00:01:50.497 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:50.497 
CC module/event/subsystems/vmd/vmd_rpc.o 00:01:50.497 CC module/event/subsystems/vmd/vmd.o 00:01:50.757 LIB libspdk_event_vhost_blk.a 00:01:50.757 LIB libspdk_event_sock.a 00:01:50.757 LIB libspdk_event_scheduler.a 00:01:50.757 SO libspdk_event_vhost_blk.so.2.0 00:01:50.757 SO libspdk_event_sock.so.4.0 00:01:50.757 LIB libspdk_event_vmd.a 00:01:50.757 LIB libspdk_event_iobuf.a 00:01:50.757 SO libspdk_event_scheduler.so.3.0 00:01:50.757 SYMLINK libspdk_event_vhost_blk.so 00:01:50.757 SO libspdk_event_vmd.so.5.0 00:01:50.757 SO libspdk_event_iobuf.so.2.0 00:01:50.757 SYMLINK libspdk_event_sock.so 00:01:50.757 SYMLINK libspdk_event_scheduler.so 00:01:50.757 SYMLINK libspdk_event_iobuf.so 00:01:50.757 SYMLINK libspdk_event_vmd.so 00:01:51.016 CC module/event/subsystems/accel/accel.o 00:01:51.276 LIB libspdk_event_accel.a 00:01:51.276 SO libspdk_event_accel.so.5.0 00:01:51.276 SYMLINK libspdk_event_accel.so 00:01:51.536 CC module/event/subsystems/bdev/bdev.o 00:01:51.536 LIB libspdk_event_bdev.a 00:01:51.536 SO libspdk_event_bdev.so.5.0 00:01:51.795 SYMLINK libspdk_event_bdev.so 00:01:51.795 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:51.795 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:51.795 CC module/event/subsystems/ublk/ublk.o 00:01:51.795 CC module/event/subsystems/scsi/scsi.o 00:01:51.795 CC module/event/subsystems/nbd/nbd.o 00:01:52.055 LIB libspdk_event_ublk.a 00:01:52.055 LIB libspdk_event_nbd.a 00:01:52.055 LIB libspdk_event_scsi.a 00:01:52.055 LIB libspdk_event_nvmf.a 00:01:52.055 SO libspdk_event_ublk.so.2.0 00:01:52.055 SO libspdk_event_nbd.so.5.0 00:01:52.055 SO libspdk_event_scsi.so.5.0 00:01:52.055 SO libspdk_event_nvmf.so.5.0 00:01:52.055 SYMLINK libspdk_event_ublk.so 00:01:52.055 SYMLINK libspdk_event_nbd.so 00:01:52.055 SYMLINK libspdk_event_scsi.so 00:01:52.055 SYMLINK libspdk_event_nvmf.so 00:01:52.314 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:52.314 CC module/event/subsystems/iscsi/iscsi.o 00:01:52.573 LIB libspdk_event_vhost_scsi.a 00:01:52.573 SO libspdk_event_vhost_scsi.so.2.0 00:01:52.573 LIB libspdk_event_iscsi.a 00:01:52.573 SYMLINK libspdk_event_vhost_scsi.so 00:01:52.573 SO libspdk_event_iscsi.so.5.0 00:01:52.573 SYMLINK libspdk_event_iscsi.so 00:01:52.573 SO libspdk.so.5.0 00:01:52.573 SYMLINK libspdk.so 00:01:52.832 CC app/spdk_lspci/spdk_lspci.o 00:01:52.832 CC app/spdk_nvme_identify/identify.o 00:01:52.832 CC test/rpc_client/rpc_client_test.o 00:01:52.832 CC app/spdk_nvme_discover/discovery_aer.o 00:01:52.832 CXX app/trace/trace.o 00:01:52.832 CC app/spdk_top/spdk_top.o 00:01:52.832 CC app/spdk_nvme_perf/perf.o 00:01:52.832 CC app/trace_record/trace_record.o 00:01:52.832 TEST_HEADER include/spdk/accel_module.h 00:01:52.832 TEST_HEADER include/spdk/accel.h 00:01:52.832 TEST_HEADER include/spdk/base64.h 00:01:52.833 TEST_HEADER include/spdk/assert.h 00:01:52.833 TEST_HEADER include/spdk/barrier.h 00:01:52.833 TEST_HEADER include/spdk/bdev_module.h 00:01:52.833 TEST_HEADER include/spdk/bdev.h 00:01:52.833 TEST_HEADER include/spdk/bdev_zone.h 00:01:52.833 TEST_HEADER include/spdk/bit_array.h 00:01:52.833 TEST_HEADER include/spdk/bit_pool.h 00:01:52.833 TEST_HEADER include/spdk/blob_bdev.h 00:01:52.833 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:52.833 TEST_HEADER include/spdk/blob.h 00:01:52.833 TEST_HEADER include/spdk/blobfs.h 00:01:52.833 TEST_HEADER include/spdk/config.h 00:01:52.833 TEST_HEADER include/spdk/conf.h 00:01:53.097 TEST_HEADER include/spdk/cpuset.h 00:01:53.097 TEST_HEADER include/spdk/crc32.h 00:01:53.097 TEST_HEADER 
include/spdk/crc16.h 00:01:53.097 TEST_HEADER include/spdk/crc64.h 00:01:53.097 TEST_HEADER include/spdk/dif.h 00:01:53.097 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:53.097 TEST_HEADER include/spdk/dma.h 00:01:53.097 TEST_HEADER include/spdk/endian.h 00:01:53.097 TEST_HEADER include/spdk/env_dpdk.h 00:01:53.097 TEST_HEADER include/spdk/env.h 00:01:53.097 CC app/spdk_dd/spdk_dd.o 00:01:53.097 TEST_HEADER include/spdk/fd_group.h 00:01:53.098 TEST_HEADER include/spdk/event.h 00:01:53.098 TEST_HEADER include/spdk/fd.h 00:01:53.098 TEST_HEADER include/spdk/file.h 00:01:53.098 TEST_HEADER include/spdk/ftl.h 00:01:53.098 TEST_HEADER include/spdk/gpt_spec.h 00:01:53.098 TEST_HEADER include/spdk/hexlify.h 00:01:53.098 TEST_HEADER include/spdk/idxd.h 00:01:53.098 TEST_HEADER include/spdk/histogram_data.h 00:01:53.098 TEST_HEADER include/spdk/idxd_spec.h 00:01:53.098 TEST_HEADER include/spdk/init.h 00:01:53.098 CC app/vhost/vhost.o 00:01:53.098 TEST_HEADER include/spdk/json.h 00:01:53.098 TEST_HEADER include/spdk/ioat.h 00:01:53.098 TEST_HEADER include/spdk/iscsi_spec.h 00:01:53.098 TEST_HEADER include/spdk/ioat_spec.h 00:01:53.098 TEST_HEADER include/spdk/likely.h 00:01:53.098 TEST_HEADER include/spdk/log.h 00:01:53.098 TEST_HEADER include/spdk/memory.h 00:01:53.098 TEST_HEADER include/spdk/jsonrpc.h 00:01:53.098 CC app/nvmf_tgt/nvmf_main.o 00:01:53.098 TEST_HEADER include/spdk/lvol.h 00:01:53.098 TEST_HEADER include/spdk/nbd.h 00:01:53.098 TEST_HEADER include/spdk/mmio.h 00:01:53.098 TEST_HEADER include/spdk/notify.h 00:01:53.098 CC app/spdk_tgt/spdk_tgt.o 00:01:53.098 CC app/iscsi_tgt/iscsi_tgt.o 00:01:53.098 TEST_HEADER include/spdk/nvme.h 00:01:53.098 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:53.098 TEST_HEADER include/spdk/nvme_intel.h 00:01:53.098 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:53.098 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:53.098 TEST_HEADER include/spdk/nvme_spec.h 00:01:53.098 TEST_HEADER include/spdk/nvme_zns.h 00:01:53.098 TEST_HEADER include/spdk/nvmf.h 00:01:53.098 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:53.098 TEST_HEADER include/spdk/nvmf_spec.h 00:01:53.098 TEST_HEADER include/spdk/nvmf_transport.h 00:01:53.098 TEST_HEADER include/spdk/opal.h 00:01:53.098 TEST_HEADER include/spdk/opal_spec.h 00:01:53.098 TEST_HEADER include/spdk/pipe.h 00:01:53.098 TEST_HEADER include/spdk/pci_ids.h 00:01:53.098 TEST_HEADER include/spdk/queue.h 00:01:53.098 TEST_HEADER include/spdk/rpc.h 00:01:53.098 TEST_HEADER include/spdk/reduce.h 00:01:53.098 TEST_HEADER include/spdk/scheduler.h 00:01:53.098 TEST_HEADER include/spdk/scsi_spec.h 00:01:53.098 TEST_HEADER include/spdk/scsi.h 00:01:53.098 TEST_HEADER include/spdk/sock.h 00:01:53.098 TEST_HEADER include/spdk/string.h 00:01:53.098 TEST_HEADER include/spdk/stdinc.h 00:01:53.098 TEST_HEADER include/spdk/thread.h 00:01:53.098 TEST_HEADER include/spdk/trace.h 00:01:53.098 TEST_HEADER include/spdk/tree.h 00:01:53.098 TEST_HEADER include/spdk/ublk.h 00:01:53.098 TEST_HEADER include/spdk/trace_parser.h 00:01:53.098 TEST_HEADER include/spdk/util.h 00:01:53.098 TEST_HEADER include/spdk/uuid.h 00:01:53.098 TEST_HEADER include/spdk/version.h 00:01:53.098 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:53.098 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:53.098 TEST_HEADER include/spdk/vmd.h 00:01:53.098 TEST_HEADER include/spdk/xor.h 00:01:53.098 CC examples/accel/perf/accel_perf.o 00:01:53.098 TEST_HEADER include/spdk/vhost.h 00:01:53.098 TEST_HEADER include/spdk/zipf.h 00:01:53.098 CXX test/cpp_headers/accel.o 00:01:53.098 CXX 
test/cpp_headers/accel_module.o 00:01:53.098 CXX test/cpp_headers/barrier.o 00:01:53.098 CC test/nvme/aer/aer.o 00:01:53.098 CXX test/cpp_headers/assert.o 00:01:53.098 CXX test/cpp_headers/bdev.o 00:01:53.098 CXX test/cpp_headers/base64.o 00:01:53.098 CXX test/cpp_headers/bdev_zone.o 00:01:53.098 CC examples/util/zipf/zipf.o 00:01:53.098 CXX test/cpp_headers/bit_array.o 00:01:53.098 CXX test/cpp_headers/bdev_module.o 00:01:53.098 CXX test/cpp_headers/bit_pool.o 00:01:53.098 CXX test/cpp_headers/blob_bdev.o 00:01:53.098 CXX test/cpp_headers/blobfs_bdev.o 00:01:53.098 CC test/app/histogram_perf/histogram_perf.o 00:01:53.098 CC test/env/pci/pci_ut.o 00:01:53.098 CXX test/cpp_headers/blobfs.o 00:01:53.098 CXX test/cpp_headers/blob.o 00:01:53.098 CC examples/ioat/perf/perf.o 00:01:53.098 CXX test/cpp_headers/conf.o 00:01:53.098 CC examples/nvme/reconnect/reconnect.o 00:01:53.098 CC test/nvme/startup/startup.o 00:01:53.098 CXX test/cpp_headers/config.o 00:01:53.098 CC test/env/vtophys/vtophys.o 00:01:53.098 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:53.098 CXX test/cpp_headers/crc32.o 00:01:53.098 CC examples/ioat/verify/verify.o 00:01:53.098 CXX test/cpp_headers/crc64.o 00:01:53.098 CXX test/cpp_headers/crc16.o 00:01:53.098 CXX test/cpp_headers/cpuset.o 00:01:53.098 CC test/nvme/sgl/sgl.o 00:01:53.098 CXX test/cpp_headers/dif.o 00:01:53.098 CC test/nvme/fdp/fdp.o 00:01:53.098 CC test/nvme/overhead/overhead.o 00:01:53.098 CC test/nvme/e2edp/nvme_dp.o 00:01:53.098 CC test/env/memory/memory_ut.o 00:01:53.098 CC test/nvme/cuse/cuse.o 00:01:53.098 CC examples/idxd/perf/perf.o 00:01:53.098 CC examples/vmd/led/led.o 00:01:53.098 CC examples/vmd/lsvmd/lsvmd.o 00:01:53.098 CC test/nvme/simple_copy/simple_copy.o 00:01:53.098 CC test/nvme/compliance/nvme_compliance.o 00:01:53.098 CC examples/sock/hello_world/hello_sock.o 00:01:53.098 CC test/nvme/err_injection/err_injection.o 00:01:53.098 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:53.098 CC examples/nvme/abort/abort.o 00:01:53.098 CC test/app/jsoncat/jsoncat.o 00:01:53.098 CC app/fio/nvme/fio_plugin.o 00:01:53.098 CC test/event/event_perf/event_perf.o 00:01:53.098 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:53.098 CC test/nvme/connect_stress/connect_stress.o 00:01:53.098 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:53.098 CC test/event/reactor_perf/reactor_perf.o 00:01:53.098 CC test/nvme/reset/reset.o 00:01:53.098 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:53.098 CC test/thread/poller_perf/poller_perf.o 00:01:53.098 CC examples/nvme/hotplug/hotplug.o 00:01:53.098 CC examples/blob/hello_world/hello_blob.o 00:01:53.098 CC examples/nvme/hello_world/hello_world.o 00:01:53.098 CC test/nvme/reserve/reserve.o 00:01:53.098 CC test/event/reactor/reactor.o 00:01:53.098 CC test/event/app_repeat/app_repeat.o 00:01:53.098 CC test/nvme/boot_partition/boot_partition.o 00:01:53.098 CC test/nvme/fused_ordering/fused_ordering.o 00:01:53.098 CC examples/blob/cli/blobcli.o 00:01:53.098 CC examples/bdev/bdevperf/bdevperf.o 00:01:53.098 CC test/app/stub/stub.o 00:01:53.098 CC test/app/bdev_svc/bdev_svc.o 00:01:53.098 CC examples/nvmf/nvmf/nvmf.o 00:01:53.098 CC examples/bdev/hello_world/hello_bdev.o 00:01:53.098 CC app/fio/bdev/fio_plugin.o 00:01:53.098 CC examples/nvme/arbitration/arbitration.o 00:01:53.098 CC test/dma/test_dma/test_dma.o 00:01:53.098 CC test/blobfs/mkfs/mkfs.o 00:01:53.098 CC examples/thread/thread/thread_ex.o 00:01:53.098 CC test/accel/dif/dif.o 00:01:53.098 CC test/event/scheduler/scheduler.o 00:01:53.098 CC 
test/bdev/bdevio/bdevio.o 00:01:53.357 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:53.357 CC test/env/mem_callbacks/mem_callbacks.o 00:01:53.357 LINK spdk_nvme_discover 00:01:53.357 CC test/lvol/esnap/esnap.o 00:01:53.357 LINK spdk_lspci 00:01:53.357 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:53.357 LINK spdk_trace_record 00:01:53.357 LINK zipf 00:01:53.357 LINK lsvmd 00:01:53.357 LINK led 00:01:53.357 LINK histogram_perf 00:01:53.357 LINK vtophys 00:01:53.357 LINK spdk_tgt 00:01:53.357 LINK reactor 00:01:53.357 LINK reactor_perf 00:01:53.357 LINK iscsi_tgt 00:01:53.357 LINK rpc_client_test 00:01:53.357 LINK interrupt_tgt 00:01:53.357 LINK nvmf_tgt 00:01:53.357 LINK app_repeat 00:01:53.357 LINK err_injection 00:01:53.357 LINK ioat_perf 00:01:53.357 LINK pmr_persistence 00:01:53.357 LINK vhost 00:01:53.357 LINK bdev_svc 00:01:53.357 LINK cmb_copy 00:01:53.690 LINK jsoncat 00:01:53.690 CXX test/cpp_headers/dma.o 00:01:53.690 LINK simple_copy 00:01:53.690 CXX test/cpp_headers/endian.o 00:01:53.690 LINK startup 00:01:53.690 LINK event_perf 00:01:53.690 LINK hello_world 00:01:53.690 CXX test/cpp_headers/env_dpdk.o 00:01:53.690 CXX test/cpp_headers/env.o 00:01:53.690 CXX test/cpp_headers/event.o 00:01:53.690 CXX test/cpp_headers/fd_group.o 00:01:53.690 LINK nvme_dp 00:01:53.690 LINK poller_perf 00:01:53.690 CXX test/cpp_headers/fd.o 00:01:53.690 LINK env_dpdk_post_init 00:01:53.690 CXX test/cpp_headers/file.o 00:01:53.690 LINK spdk_dd 00:01:53.690 LINK boot_partition 00:01:53.690 LINK overhead 00:01:53.690 LINK stub 00:01:53.690 LINK scheduler 00:01:53.690 LINK thread 00:01:53.690 CXX test/cpp_headers/ftl.o 00:01:53.690 LINK hello_sock 00:01:53.690 LINK connect_stress 00:01:53.690 LINK doorbell_aers 00:01:53.690 LINK verify 00:01:53.690 LINK fused_ordering 00:01:53.690 LINK reserve 00:01:53.690 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:53.690 CXX test/cpp_headers/gpt_spec.o 00:01:53.690 LINK nvme_compliance 00:01:53.690 LINK mkfs 00:01:53.690 LINK hello_bdev 00:01:53.690 LINK reconnect 00:01:53.690 LINK sgl 00:01:53.690 CXX test/cpp_headers/hexlify.o 00:01:53.690 LINK spdk_trace 00:01:53.690 LINK arbitration 00:01:53.690 CXX test/cpp_headers/histogram_data.o 00:01:53.690 LINK hello_blob 00:01:53.690 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:53.690 LINK reset 00:01:53.690 CXX test/cpp_headers/idxd.o 00:01:53.690 CXX test/cpp_headers/idxd_spec.o 00:01:53.690 CXX test/cpp_headers/init.o 00:01:53.690 CXX test/cpp_headers/ioat.o 00:01:53.690 CXX test/cpp_headers/ioat_spec.o 00:01:53.690 CXX test/cpp_headers/iscsi_spec.o 00:01:53.690 CXX test/cpp_headers/jsonrpc.o 00:01:53.690 CXX test/cpp_headers/json.o 00:01:53.690 CXX test/cpp_headers/likely.o 00:01:53.690 CXX test/cpp_headers/log.o 00:01:53.690 CXX test/cpp_headers/lvol.o 00:01:53.690 CXX test/cpp_headers/memory.o 00:01:53.690 CXX test/cpp_headers/mmio.o 00:01:53.690 LINK hotplug 00:01:53.690 CXX test/cpp_headers/nbd.o 00:01:53.690 CXX test/cpp_headers/notify.o 00:01:53.690 CXX test/cpp_headers/nvme.o 00:01:53.690 LINK test_dma 00:01:53.690 LINK pci_ut 00:01:53.690 CXX test/cpp_headers/nvme_intel.o 00:01:53.690 LINK aer 00:01:53.690 CXX test/cpp_headers/nvme_ocssd.o 00:01:53.690 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:53.691 CXX test/cpp_headers/nvme_zns.o 00:01:53.691 CXX test/cpp_headers/nvme_spec.o 00:01:53.691 LINK fdp 00:01:53.691 CXX test/cpp_headers/nvmf_cmd.o 00:01:53.691 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:53.952 CXX test/cpp_headers/nvmf_spec.o 00:01:53.952 LINK dif 00:01:53.952 CXX test/cpp_headers/nvmf.o 
00:01:53.952 CXX test/cpp_headers/nvmf_transport.o 00:01:53.952 LINK nvmf 00:01:53.952 CXX test/cpp_headers/opal.o 00:01:53.952 CXX test/cpp_headers/opal_spec.o 00:01:53.952 CXX test/cpp_headers/pci_ids.o 00:01:53.952 CXX test/cpp_headers/pipe.o 00:01:53.952 CXX test/cpp_headers/queue.o 00:01:53.952 CXX test/cpp_headers/reduce.o 00:01:53.952 CXX test/cpp_headers/rpc.o 00:01:53.952 CXX test/cpp_headers/scheduler.o 00:01:53.952 CXX test/cpp_headers/scsi.o 00:01:53.952 CXX test/cpp_headers/scsi_spec.o 00:01:53.952 CXX test/cpp_headers/sock.o 00:01:53.952 CXX test/cpp_headers/stdinc.o 00:01:53.952 CXX test/cpp_headers/string.o 00:01:53.952 CXX test/cpp_headers/thread.o 00:01:53.952 CXX test/cpp_headers/trace_parser.o 00:01:53.952 CXX test/cpp_headers/trace.o 00:01:53.952 CXX test/cpp_headers/tree.o 00:01:53.952 CXX test/cpp_headers/ublk.o 00:01:53.952 LINK idxd_perf 00:01:53.952 CXX test/cpp_headers/util.o 00:01:53.952 LINK abort 00:01:53.952 CXX test/cpp_headers/uuid.o 00:01:53.952 LINK blobcli 00:01:53.952 CXX test/cpp_headers/version.o 00:01:53.952 CXX test/cpp_headers/vfio_user_spec.o 00:01:53.952 CXX test/cpp_headers/vfio_user_pci.o 00:01:53.952 LINK spdk_bdev 00:01:53.952 CXX test/cpp_headers/vhost.o 00:01:53.952 LINK bdevio 00:01:53.952 LINK accel_perf 00:01:53.952 CXX test/cpp_headers/xor.o 00:01:53.952 CXX test/cpp_headers/vmd.o 00:01:53.952 CXX test/cpp_headers/zipf.o 00:01:54.210 LINK nvme_fuzz 00:01:54.210 LINK nvme_manage 00:01:54.210 LINK spdk_nvme 00:01:54.210 LINK spdk_nvme_perf 00:01:54.210 LINK bdevperf 00:01:54.210 LINK spdk_top 00:01:54.210 LINK mem_callbacks 00:01:54.469 LINK cuse 00:01:54.469 LINK spdk_nvme_identify 00:01:54.469 LINK vhost_fuzz 00:01:54.469 LINK memory_ut 00:01:55.038 LINK iscsi_fuzz 00:01:56.946 LINK esnap 00:01:57.206 00:01:57.206 real 0m39.463s 00:01:57.206 user 6m10.293s 00:01:57.206 sys 3m7.037s 00:01:57.206 17:27:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:57.206 17:27:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.206 ************************************ 00:01:57.206 END TEST make 00:01:57.206 ************************************ 00:01:57.206 17:27:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:57.206 17:27:18 -- nvmf/common.sh@7 -- # uname -s 00:01:57.206 17:27:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:57.206 17:27:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:57.206 17:27:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:57.206 17:27:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:57.206 17:27:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:57.206 17:27:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:57.206 17:27:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:57.206 17:27:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:57.206 17:27:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:57.206 17:27:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:57.206 17:27:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:01:57.206 17:27:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:01:57.206 17:27:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:57.500 17:27:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:57.500 17:27:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:57.500 17:27:18 -- nvmf/common.sh@44 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:57.500 17:27:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:57.500 17:27:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.500 17:27:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.500 17:27:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.500 17:27:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.500 17:27:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.500 17:27:18 -- paths/export.sh@5 -- # export PATH 00:01:57.500 17:27:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.500 17:27:18 -- nvmf/common.sh@46 -- # : 0 00:01:57.500 17:27:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:01:57.500 17:27:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:01:57.500 17:27:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:01:57.500 17:27:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:57.500 17:27:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:57.500 17:27:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:01:57.500 17:27:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:01:57.500 17:27:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:01:57.500 17:27:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:57.500 17:27:18 -- spdk/autotest.sh@32 -- # uname -s 00:01:57.500 17:27:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:57.500 17:27:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:57.500 17:27:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.500 17:27:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:57.500 17:27:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.500 17:27:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:57.500 17:27:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:57.500 17:27:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:57.500 17:27:18 -- spdk/autotest.sh@48 -- # udevadm_pid=377593 00:01:57.500 17:27:18 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:01:57.500 17:27:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:57.500 17:27:18 -- spdk/autotest.sh@54 -- # echo 377595 
00:01:57.500 17:27:18 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:01:57.500 17:27:18 -- spdk/autotest.sh@56 -- # echo 377596 00:01:57.500 17:27:18 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:01:57.500 17:27:18 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:01:57.500 17:27:18 -- spdk/autotest.sh@60 -- # echo 377597 00:01:57.500 17:27:18 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:01:57.500 17:27:18 -- spdk/autotest.sh@62 -- # echo 377598 00:01:57.500 17:27:18 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:01:57.500 17:27:18 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:57.500 17:27:18 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:01:57.500 17:27:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:01:57.500 17:27:18 -- common/autotest_common.sh@10 -- # set +x 00:01:57.500 17:27:18 -- spdk/autotest.sh@70 -- # create_test_list 00:01:57.500 17:27:18 -- common/autotest_common.sh@736 -- # xtrace_disable 00:01:57.500 17:27:18 -- common/autotest_common.sh@10 -- # set +x 00:01:57.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:01:57.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:01:57.500 17:27:18 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:57.500 17:27:18 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.500 17:27:18 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.500 17:27:18 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:57.500 17:27:18 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.500 17:27:18 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:01:57.500 17:27:18 -- common/autotest_common.sh@1440 -- # uname 00:01:57.500 17:27:18 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:01:57.500 17:27:18 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:01:57.500 17:27:18 -- common/autotest_common.sh@1460 -- # uname 00:01:57.500 17:27:18 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:01:57.500 17:27:18 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:01:57.500 17:27:18 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:01:57.500 17:27:18 -- spdk/autotest.sh@83 -- # hash lcov 00:01:57.500 17:27:18 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:57.500 17:27:18 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:01:57.500 --rc lcov_branch_coverage=1 00:01:57.500 --rc lcov_function_coverage=1 00:01:57.500 --rc genhtml_branch_coverage=1 00:01:57.500 --rc genhtml_function_coverage=1 00:01:57.500 --rc genhtml_legend=1 00:01:57.500 --rc geninfo_all_blocks=1 00:01:57.500 ' 00:01:57.500 17:27:18 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 
00:01:57.500 --rc lcov_branch_coverage=1 00:01:57.500 --rc lcov_function_coverage=1 00:01:57.500 --rc genhtml_branch_coverage=1 00:01:57.500 --rc genhtml_function_coverage=1 00:01:57.500 --rc genhtml_legend=1 00:01:57.500 --rc geninfo_all_blocks=1 00:01:57.500 ' 00:01:57.500 17:27:18 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:01:57.500 --rc lcov_branch_coverage=1 00:01:57.500 --rc lcov_function_coverage=1 00:01:57.500 --rc genhtml_branch_coverage=1 00:01:57.500 --rc genhtml_function_coverage=1 00:01:57.500 --rc genhtml_legend=1 00:01:57.500 --rc geninfo_all_blocks=1 00:01:57.500 --no-external' 00:01:57.500 17:27:18 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:01:57.500 --rc lcov_branch_coverage=1 00:01:57.500 --rc lcov_function_coverage=1 00:01:57.500 --rc genhtml_branch_coverage=1 00:01:57.500 --rc genhtml_function_coverage=1 00:01:57.500 --rc genhtml_legend=1 00:01:57.500 --rc geninfo_all_blocks=1 00:01:57.500 --no-external' 00:01:57.500 17:27:18 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:57.500 lcov: LCOV version 1.14 00:01:57.500 17:27:18 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:00.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:00.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:00.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:00.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:00.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:00.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:18.133 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:18.133 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:18.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:18.134 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 
00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:18.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:18.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:20.042 17:27:41 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:20.042 17:27:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:20.042 17:27:41 -- common/autotest_common.sh@10 -- # set +x 00:02:20.042 
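The geninfo warnings above are expected: every object under test/cpp_headers compiles a single SPDK public header into a translation unit with no executable code, so each .gcno notes file legitimately contains no functions and GCOV has no data to report. The sketch below shows how such empty records could be captured and then filtered out of a coverage report; the lcov invocation and the output file names are illustrative assumptions, not commands run by this job.

  # Illustrative sketch only; assumes lcov is installed and the tree was built
  # with --coverage. Not part of the original autotest run.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Capture whatever coverage data exists; the header-only objects under
  # test/cpp_headers carry .gcno files with no functions, which is exactly
  # what triggers the "no functions found" warnings above.
  lcov --capture --directory "$SPDK_DIR" --output-file coverage.info
  # Drop the empty header-compile records so they do not clutter the report.
  lcov --remove coverage.info '*/test/cpp_headers/*' --output-file coverage.filtered.info
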
17:27:41 -- spdk/autotest.sh@102 -- # rm -f 00:02:20.042 17:27:41 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:22.582 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:22.582 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:22.582 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:22.583 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:22.583 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:22.583 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:22.583 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:22.583 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:22.842 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:22.842 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:22.842 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:22.842 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:22.842 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:22.842 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:22.842 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:22.842 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:22.842 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:22.842 17:27:44 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:22.842 17:27:44 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:22.842 17:27:44 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:22.842 17:27:44 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:22.842 17:27:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:22.842 17:27:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:22.842 17:27:44 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:22.842 17:27:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:22.842 17:27:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:22.842 17:27:44 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:22.842 17:27:44 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:22.842 17:27:44 -- spdk/autotest.sh@121 -- # grep -v p 00:02:22.842 17:27:44 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:22.842 17:27:44 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:22.843 17:27:44 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:22.843 17:27:44 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:22.843 17:27:44 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:22.843 No valid GPT data, bailing 00:02:22.843 17:27:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:23.102 17:27:44 -- scripts/common.sh@393 -- # pt= 00:02:23.102 17:27:44 -- scripts/common.sh@394 -- # return 1 00:02:23.102 17:27:44 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:23.102 1+0 records in 00:02:23.102 1+0 records out 00:02:23.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441905 s, 237 MB/s 00:02:23.102 17:27:44 -- spdk/autotest.sh@129 -- # sync 00:02:23.102 17:27:44 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:23.102 17:27:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:23.102 17:27:44 -- common/autotest_common.sh@22 
-- # reap_spdk_processes 00:02:28.375 17:27:49 -- spdk/autotest.sh@135 -- # uname -s 00:02:28.375 17:27:49 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:28.375 17:27:49 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:28.375 17:27:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:28.375 17:27:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:28.375 17:27:49 -- common/autotest_common.sh@10 -- # set +x 00:02:28.375 ************************************ 00:02:28.375 START TEST setup.sh 00:02:28.375 ************************************ 00:02:28.375 17:27:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:28.375 * Looking for test storage... 00:02:28.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:28.375 17:27:49 -- setup/test-setup.sh@10 -- # uname -s 00:02:28.375 17:27:49 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:28.375 17:27:49 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:28.375 17:27:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:28.375 17:27:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:28.375 17:27:49 -- common/autotest_common.sh@10 -- # set +x 00:02:28.375 ************************************ 00:02:28.375 START TEST acl 00:02:28.375 ************************************ 00:02:28.375 17:27:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:28.375 * Looking for test storage... 00:02:28.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:28.375 17:27:49 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:28.375 17:27:49 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:28.375 17:27:49 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:28.375 17:27:49 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:28.375 17:27:49 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:28.375 17:27:49 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:28.375 17:27:49 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:28.375 17:27:49 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:28.375 17:27:49 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:28.375 17:27:49 -- setup/acl.sh@12 -- # devs=() 00:02:28.375 17:27:49 -- setup/acl.sh@12 -- # declare -a devs 00:02:28.375 17:27:49 -- setup/acl.sh@13 -- # drivers=() 00:02:28.375 17:27:49 -- setup/acl.sh@13 -- # declare -A drivers 00:02:28.375 17:27:49 -- setup/acl.sh@51 -- # setup reset 00:02:28.375 17:27:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:28.375 17:27:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:31.665 17:27:52 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:31.665 17:27:52 -- setup/acl.sh@16 -- # local dev driver 00:02:31.665 17:27:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.665 17:27:52 -- setup/acl.sh@15 -- # setup output status 00:02:31.665 17:27:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:31.665 17:27:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:34.237 Hugepages 00:02:34.237 node hugesize free 
/ total 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 00:02:34.237 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:34.237 17:27:55 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- 
setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.237 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.237 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:34.237 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.238 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.238 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.238 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:34.238 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.238 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.238 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.238 17:27:55 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:34.238 17:27:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.238 17:27:55 -- setup/acl.sh@20 -- # continue 00:02:34.238 17:27:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.238 17:27:55 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:34.238 17:27:55 -- setup/acl.sh@54 -- # run_test denied denied 00:02:34.238 17:27:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:34.238 17:27:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:34.238 17:27:55 -- common/autotest_common.sh@10 -- # set +x 00:02:34.238 ************************************ 00:02:34.238 START TEST denied 00:02:34.238 ************************************ 00:02:34.238 17:27:55 -- common/autotest_common.sh@1104 -- # denied 00:02:34.238 17:27:55 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:34.238 17:27:55 -- setup/acl.sh@38 -- # setup output config 00:02:34.238 17:27:55 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:34.238 17:27:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.238 17:27:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:36.775 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:36.775 17:27:58 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:36.775 17:27:58 -- setup/acl.sh@28 -- # local dev driver 00:02:36.775 17:27:58 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:36.775 17:27:58 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:36.775 17:27:58 -- setup/acl.sh@32 -- # 
readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:36.775 17:27:58 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:36.775 17:27:58 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:36.775 17:27:58 -- setup/acl.sh@41 -- # setup reset 00:02:36.775 17:27:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:36.775 17:27:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.966 00:02:40.966 real 0m6.596s 00:02:40.966 user 0m2.169s 00:02:40.966 sys 0m3.687s 00:02:40.966 17:28:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.966 17:28:02 -- common/autotest_common.sh@10 -- # set +x 00:02:40.966 ************************************ 00:02:40.966 END TEST denied 00:02:40.966 ************************************ 00:02:40.966 17:28:02 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:40.966 17:28:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:40.966 17:28:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:40.966 17:28:02 -- common/autotest_common.sh@10 -- # set +x 00:02:40.966 ************************************ 00:02:40.966 START TEST allowed 00:02:40.966 ************************************ 00:02:40.966 17:28:02 -- common/autotest_common.sh@1104 -- # allowed 00:02:40.966 17:28:02 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:40.966 17:28:02 -- setup/acl.sh@45 -- # setup output config 00:02:40.966 17:28:02 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:40.966 17:28:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.966 17:28:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:44.262 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:44.262 17:28:05 -- setup/acl.sh@47 -- # verify 00:02:44.262 17:28:05 -- setup/acl.sh@28 -- # local dev driver 00:02:44.263 17:28:05 -- setup/acl.sh@48 -- # setup reset 00:02:44.263 17:28:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:44.263 17:28:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.552 00:02:47.552 real 0m6.510s 00:02:47.552 user 0m1.969s 00:02:47.552 sys 0m3.670s 00:02:47.552 17:28:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:47.553 17:28:08 -- common/autotest_common.sh@10 -- # set +x 00:02:47.553 ************************************ 00:02:47.553 END TEST allowed 00:02:47.553 ************************************ 00:02:47.553 00:02:47.553 real 0m19.037s 00:02:47.553 user 0m6.378s 00:02:47.553 sys 0m11.270s 00:02:47.553 17:28:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:47.553 17:28:08 -- common/autotest_common.sh@10 -- # set +x 00:02:47.553 ************************************ 00:02:47.553 END TEST acl 00:02:47.553 ************************************ 00:02:47.553 17:28:08 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.553 17:28:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:47.553 17:28:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:47.553 17:28:08 -- common/autotest_common.sh@10 -- # set +x 00:02:47.553 ************************************ 00:02:47.553 START TEST hugepages 00:02:47.553 ************************************ 00:02:47.553 17:28:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.553 * Looking for test storage... 
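The acl suite that just completed exercises scripts/setup.sh purely through the PCI_BLOCKED and PCI_ALLOWED environment variables: the denied test expects the blocked controller to be reported as skipped, and the allowed test expects only that controller to be rebound from nvme to vfio-pci. The standalone sketch below simply mirrors the two invocations visible in the trace; it assumes root privileges on a disposable host with the NVMe controller at 0000:5e:00.0 and is not part of the original run.

  # Illustrative sketch mirroring the acl denied/allowed checks above; run as
  # root on a disposable test host only.
  SETUP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
  # denied: the blocked controller must be reported as skipped
  PCI_BLOCKED=' 0000:5e:00.0' "$SETUP" config | grep 'Skipping denied controller at 0000:5e:00.0'
  # allowed: only this controller is eligible, so it moves from nvme to vfio-pci
  PCI_ALLOWED=0000:5e:00.0 "$SETUP" config | grep -E '0000:5e:00.0 .*: nvme -> .*'
  # rebind everything back to the kernel drivers afterwards
  "$SETUP" reset
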
00:02:47.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:47.553 17:28:08 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:47.553 17:28:08 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:47.553 17:28:08 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:47.553 17:28:08 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:47.553 17:28:08 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:47.553 17:28:08 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:47.553 17:28:08 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:47.553 17:28:08 -- setup/common.sh@18 -- # local node= 00:02:47.553 17:28:08 -- setup/common.sh@19 -- # local var val 00:02:47.553 17:28:08 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.553 17:28:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.553 17:28:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.553 17:28:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.553 17:28:08 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.553 17:28:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168346496 kB' 'MemAvailable: 171579444 kB' 'Buffers: 3896 kB' 'Cached: 14634808 kB' 'SwapCached: 0 kB' 'Active: 11484544 kB' 'Inactive: 3694072 kB' 'Active(anon): 11066588 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543268 kB' 'Mapped: 217208 kB' 'Shmem: 10526676 kB' 'KReclaimable: 530268 kB' 'Slab: 1183860 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 653592 kB' 'KernelStack: 20656 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982020 kB' 'Committed_AS: 12606488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.553 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.553 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 
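The long run of [[ <field> == Hugepagesize ]] / continue steps above and below is setup/common.sh's get_meminfo scanning every field of /proc/meminfo until it reaches Hugepagesize and returns its value (2048 kB on this node). A compact equivalent, shown purely as an illustration and not used by the test itself:

  # Illustrative one-liner equivalent of 'get_meminfo Hugepagesize':
  awk '/^Hugepagesize:/ {print $2}' /proc/meminfo    # prints 2048 on this host
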
00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # continue 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.554 17:28:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.554 17:28:08 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.554 17:28:08 -- setup/common.sh@33 -- # echo 2048 00:02:47.554 17:28:08 -- setup/common.sh@33 -- # return 0 00:02:47.554 17:28:08 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:47.554 17:28:08 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:47.554 17:28:08 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:47.554 17:28:08 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:47.554 17:28:08 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:47.554 17:28:08 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:47.554 17:28:08 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:47.554 17:28:08 -- setup/hugepages.sh@207 -- # get_nodes 00:02:47.554 17:28:08 -- setup/hugepages.sh@27 -- # local node 00:02:47.554 17:28:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.554 17:28:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:47.554 17:28:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.554 17:28:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:47.554 17:28:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:47.554 17:28:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:47.554 17:28:08 -- setup/hugepages.sh@208 -- # clear_hp 00:02:47.554 17:28:08 -- setup/hugepages.sh@37 -- # local node hp 00:02:47.554 17:28:08 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:47.554 17:28:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.554 17:28:08 -- setup/hugepages.sh@41 -- # echo 0 00:02:47.554 17:28:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.554 17:28:08 -- setup/hugepages.sh@41 -- # echo 0 00:02:47.554 17:28:08 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:47.554 17:28:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.554 17:28:08 -- setup/hugepages.sh@41 -- # echo 0 00:02:47.554 17:28:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.554 17:28:08 -- setup/hugepages.sh@41 -- # echo 0 00:02:47.554 17:28:08 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:47.554 17:28:08 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:47.554 17:28:08 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:47.554 17:28:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:47.554 17:28:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:47.554 17:28:08 -- common/autotest_common.sh@10 -- # set +x 00:02:47.554 ************************************ 00:02:47.554 START TEST default_setup 00:02:47.554 ************************************ 00:02:47.554 17:28:08 -- common/autotest_common.sh@1104 -- # default_setup 00:02:47.554 17:28:08 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:47.554 17:28:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:47.554 17:28:08 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:47.554 17:28:08 -- setup/hugepages.sh@51 -- # shift 00:02:47.554 17:28:08 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:47.554 17:28:08 -- setup/hugepages.sh@52 -- # local node_ids 00:02:47.554 17:28:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.554 17:28:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:47.554 17:28:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:47.554 17:28:08 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:47.554 17:28:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.554 17:28:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:47.554 17:28:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.554 17:28:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.554 17:28:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.555 17:28:08 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
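get_test_nr_hugepages above turns the requested budget into a page count: with size=2097152 and default_hugepages=2048 (the Hugepagesize read earlier, in kB), nr_hugepages = 2097152 / 2048 = 1024, and with a single user node ('0') all 1024 pages go to node 0. That is 1024 x 2048 kB = 2 GiB, matching the HugePages_Total: 1024 and Hugetlb: 2097152 kB values in the meminfo snapshot that follows. A minimal sketch of the same arithmetic, outside the test harness:

  # Illustrative arithmetic only; variable names follow the trace above.
  size=2097152             # budget passed to get_test_nr_hugepages
  default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
  nr_hugepages=$(( size / default_hugepages ))
  echo "$nr_hugepages"     # -> 1024
  # Per-node allocation would then go through sysfs, e.g. (as root):
  #   echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
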
00:02:47.555 17:28:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.555 17:28:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:47.555 17:28:08 -- setup/hugepages.sh@73 -- # return 0 00:02:47.555 17:28:08 -- setup/hugepages.sh@137 -- # setup output 00:02:47.555 17:28:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.555 17:28:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:50.093 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:50.093 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:51.031 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:51.294 17:28:12 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:51.294 17:28:12 -- setup/hugepages.sh@89 -- # local node 00:02:51.294 17:28:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.294 17:28:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.294 17:28:12 -- setup/hugepages.sh@92 -- # local surp 00:02:51.294 17:28:12 -- setup/hugepages.sh@93 -- # local resv 00:02:51.294 17:28:12 -- setup/hugepages.sh@94 -- # local anon 00:02:51.294 17:28:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.294 17:28:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.294 17:28:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.294 17:28:12 -- setup/common.sh@18 -- # local node= 00:02:51.294 17:28:12 -- setup/common.sh@19 -- # local var val 00:02:51.294 17:28:12 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.294 17:28:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.294 17:28:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.294 17:28:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.294 17:28:12 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.294 17:28:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.294 17:28:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170512124 kB' 'MemAvailable: 173745072 kB' 'Buffers: 3896 kB' 'Cached: 14634912 kB' 'SwapCached: 0 kB' 'Active: 11500236 kB' 'Inactive: 3694072 kB' 'Active(anon): 11082280 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559112 kB' 'Mapped: 217620 kB' 'Shmem: 10526780 kB' 'KReclaimable: 530268 kB' 'Slab: 1182720 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 652452 kB' 'KernelStack: 
20656 kB' 'PageTables: 9524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12644136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317032 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.294 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.294 17:28:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:51.295 17:28:12 -- setup/common.sh@32 -- # continue
00:02:51.295 17:28:12 -- setup/common.sh@32 -- # [xtrace condensed: the field loop repeats the same IFS=': ' / read / continue steps for Inactive(file) through VmallocTotal, none of which match AnonHugePages]
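The loop condensed above is the core of the get_meminfo helper this whole verification leans on: pick the meminfo file, strip any leading "Node N" prefix, then walk the "Field: value" pairs until the requested field turns up and echo its value. A minimal bash sketch of that flow, reconstructed only from the xtrace visible in this log (the function layout and the final fallback return are assumptions):

    # get_meminfo <field> [node] -- echo the value of <field> from /proc/meminfo or from a
    # per-node meminfo file; reconstructed from setup/common.sh@16-33 as traced above.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # A node argument switches to that node's own meminfo (used for node0 further down).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # per-node lines start with "Node N ..."
        while IFS=': ' read -r var val _; do    # split "Field: value kB" into var / val
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1                                # assumed: requested field not present
    }
    # Examples from this run: get_meminfo AnonHugePages -> 0, get_meminfo HugePages_Total -> 1024.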
setup/common.sh@31 -- # read -r var val _ 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.295 17:28:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.295 17:28:12 -- setup/common.sh@33 -- # echo 0 00:02:51.295 17:28:12 -- setup/common.sh@33 -- # return 0 00:02:51.295 17:28:12 -- setup/hugepages.sh@97 -- # anon=0 00:02:51.295 17:28:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:51.295 17:28:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.295 17:28:12 -- setup/common.sh@18 -- # local node= 00:02:51.295 17:28:12 -- setup/common.sh@19 -- # local var val 00:02:51.295 17:28:12 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.295 17:28:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.295 17:28:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.295 17:28:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.295 17:28:12 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.295 17:28:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.295 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.296 17:28:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170515112 kB' 'MemAvailable: 173748060 kB' 'Buffers: 3896 kB' 'Cached: 14634920 kB' 'SwapCached: 0 kB' 'Active: 11500776 kB' 'Inactive: 3694072 kB' 'Active(anon): 11082820 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559604 kB' 'Mapped: 217164 kB' 'Shmem: 10526788 kB' 'KReclaimable: 530268 kB' 'Slab: 1182712 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 652444 kB' 'KernelStack: 20848 kB' 'PageTables: 9728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12635024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB'
00:02:51.296 17:28:12 -- setup/common.sh@32 -- # [xtrace condensed: the field loop now scans for HugePages_Surp, continuing past MemTotal through HugePages_Free]
00:02:51.297 17:28:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.297 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.297 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.297 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.297 17:28:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.297 17:28:12 -- setup/common.sh@33 -- # echo 0 00:02:51.297 17:28:12 -- setup/common.sh@33 -- # return 0 00:02:51.297 17:28:12 -- setup/hugepages.sh@99 -- # surp=0 00:02:51.297 17:28:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:51.297 17:28:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:51.297 17:28:12 -- setup/common.sh@18 -- # local node= 00:02:51.297 17:28:12 -- setup/common.sh@19 -- # local var val 00:02:51.297 17:28:12 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.297 17:28:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.297 17:28:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.297 17:28:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.297 17:28:12 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.297 17:28:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.297 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.297 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.297 17:28:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170512796 kB' 'MemAvailable: 173745744 kB' 'Buffers: 3896 kB' 'Cached: 14634936 kB' 'SwapCached: 0 kB' 'Active: 11499492 kB' 'Inactive: 3694072 kB' 'Active(anon): 11081536 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558216 kB' 'Mapped: 217164 kB' 'Shmem: 10526804 kB' 'KReclaimable: 530268 kB' 'Slab: 1182600 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 652332 kB' 'KernelStack: 20896 kB' 'PageTables: 9796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12635044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:51.297 17:28:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.297 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.297 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.297 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.297 17:28:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.297 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.297 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.297 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.297 17:28:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.297 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.297 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.297 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.297 17:28:12 -- 
setup/common.sh@32 -- # [xtrace condensed: the field loop now scans for HugePages_Rsvd, continuing past Buffers through AnonHugePages]
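Just below, HugePages_Rsvd finally matches (resv=0) and hugepages.sh echoes the pool summary -- nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 -- then re-reads HugePages_Total and checks the arithmetic. Pulled together, that accounting amounts to the following sketch (variable names follow the trace; the exact control flow of the real script is an assumption):

    # Accounting step of verify_nr_hugepages as traced around setup/hugepages.sh@97-110;
    # the helpers are the get_meminfo sketch above.
    nr_hugepages=1024                        # requested earlier via nodes_test[_no_nodes]=1024
    anon=$(get_meminfo AnonHugePages)        # 0 in this run
    surp=$(get_meminfo HugePages_Surp)       # 0
    resv=$(get_meminfo HugePages_Rsvd)       # 0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    total=$(get_meminfo HugePages_Total)     # 1024
    # The pool is consistent when the kernel total covers the requested pages plus any
    # surplus and reserved pages; here 1024 == 1024 + 0 + 0, and the dumps agree:
    # Hugetlb 2097152 kB == 1024 pages x Hugepagesize 2048 kB.
    (( total == nr_hugepages + surp + resv ))   # mirrors "(( 1024 == nr_hugepages + surp + resv ))"
    (( total == nr_hugepages ))                 # mirrors "(( 1024 == nr_hugepages ))"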
00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.298 17:28:12 -- setup/common.sh@33 -- # echo 0 00:02:51.298 17:28:12 -- setup/common.sh@33 -- # return 0 00:02:51.298 17:28:12 -- setup/hugepages.sh@100 -- # resv=0 00:02:51.298 17:28:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:51.298 nr_hugepages=1024 00:02:51.298 17:28:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.298 resv_hugepages=0 00:02:51.298 17:28:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.298 surplus_hugepages=0 00:02:51.298 17:28:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.298 anon_hugepages=0 00:02:51.298 17:28:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.298 17:28:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:51.298 17:28:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.298 17:28:12 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:02:51.298 17:28:12 -- setup/common.sh@18 -- # local node= 00:02:51.298 17:28:12 -- setup/common.sh@19 -- # local var val 00:02:51.298 17:28:12 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.298 17:28:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.298 17:28:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.298 17:28:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.298 17:28:12 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.298 17:28:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.298 17:28:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170511520 kB' 'MemAvailable: 173744468 kB' 'Buffers: 3896 kB' 'Cached: 14634940 kB' 'SwapCached: 0 kB' 'Active: 11500392 kB' 'Inactive: 3694072 kB' 'Active(anon): 11082436 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559508 kB' 'Mapped: 217172 kB' 'Shmem: 10526808 kB' 'KReclaimable: 530268 kB' 'Slab: 1182600 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 652332 kB' 'KernelStack: 21072 kB' 'PageTables: 10436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12635364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.298 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.298 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.299 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.299 17:28:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.299 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.299 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.299 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.299 17:28:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.299 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.299 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.299 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.299 17:28:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.299 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.299 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.299 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.299 17:28:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.299 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.299 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.299 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.299 17:28:12 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.299 17:28:12 -- setup/common.sh@32 -- # continue
00:02:51.299 17:28:12 -- setup/common.sh@32 -- # [xtrace condensed: the field loop now scans for HugePages_Total, continuing past Active through ShmemPmdMapped]
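Once HugePages_Total matches below (echo 1024), get_nodes enumerates /sys/devices/system/node/node* and the check moves to per-node counters; the trace records nodes_sys[0]=1024, nodes_sys[1]=0 and no_nodes=2, i.e. node0 carries the whole pool. A rough sketch of that per-node pass (where the nodes_sys values are read from is not visible in this trace, so the sysfs path below is an assumption):

    # Per-node pass around setup/hugepages.sh@27-33 and @112-117, as suggested by the trace.
    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source: each NUMA node exposes its 2048 kB pool size in sysfs.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}                # 2 on this machine
    # Surplus is then sampled from the node's own meminfo, e.g. for node 0:
    #   get_meminfo HugePages_Surp 0         # reads /sys/devices/system/node/node0/meminfo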
setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.300 17:28:12 -- setup/common.sh@33 -- # echo 1024 00:02:51.300 17:28:12 -- setup/common.sh@33 -- # return 0 00:02:51.300 17:28:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.300 17:28:12 -- setup/hugepages.sh@112 -- # get_nodes 00:02:51.300 17:28:12 -- setup/hugepages.sh@27 -- # local node 00:02:51.300 17:28:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.300 17:28:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:51.300 17:28:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.300 17:28:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:51.300 17:28:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.300 17:28:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.300 17:28:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.300 17:28:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.300 17:28:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:51.300 17:28:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.300 17:28:12 -- setup/common.sh@18 -- # local node=0 00:02:51.300 17:28:12 -- setup/common.sh@19 -- # local var val 00:02:51.300 17:28:12 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.300 17:28:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.300 17:28:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:51.300 17:28:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:51.300 17:28:12 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.300 17:28:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91605628 
kB' 'MemUsed: 6010000 kB' 'SwapCached: 0 kB' 'Active: 2236812 kB' 'Inactive: 216956 kB' 'Active(anon): 2074988 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2289820 kB' 'Mapped: 64844 kB' 'AnonPages: 166952 kB' 'Shmem: 1911040 kB' 'KernelStack: 11768 kB' 'PageTables: 4872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354976 kB' 'Slab: 652648 kB' 'SReclaimable: 354976 kB' 'SUnreclaim: 297672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.300 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.300 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 
-- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # continue 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.301 17:28:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.301 17:28:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.301 17:28:12 -- setup/common.sh@33 -- # echo 0 00:02:51.301 17:28:12 -- setup/common.sh@33 -- # return 0 00:02:51.301 17:28:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.301 17:28:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.301 17:28:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.301 17:28:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.301 17:28:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:51.301 node0=1024 expecting 1024 00:02:51.301 17:28:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:51.301 00:02:51.301 real 0m3.941s 00:02:51.301 user 0m1.237s 00:02:51.301 sys 0m1.894s 00:02:51.301 17:28:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:51.301 17:28:12 -- common/autotest_common.sh@10 -- # set +x 00:02:51.301 ************************************ 00:02:51.301 END TEST default_setup 00:02:51.301 ************************************ 00:02:51.301 17:28:12 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:51.301 17:28:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:51.301 17:28:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:51.301 17:28:12 -- common/autotest_common.sh@10 -- # set +x 00:02:51.301 ************************************ 00:02:51.301 START TEST per_node_1G_alloc 00:02:51.301 ************************************ 00:02:51.301 17:28:12 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:02:51.301 17:28:12 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:51.301 17:28:12 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:51.301 17:28:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:51.301 17:28:12 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:51.301 17:28:12 -- setup/hugepages.sh@51 -- # shift 00:02:51.301 17:28:12 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:51.301 17:28:12 -- setup/hugepages.sh@52 -- # local node_ids 00:02:51.301 17:28:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.301 17:28:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:51.301 17:28:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:51.301 17:28:12 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:51.301 17:28:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.301 17:28:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:51.301 17:28:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.301 17:28:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.301 17:28:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.301 17:28:12 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:51.301 17:28:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:51.301 17:28:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:51.301 17:28:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:51.301 17:28:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:51.301 17:28:12 -- setup/hugepages.sh@73 -- # return 0 00:02:51.301 17:28:12 -- setup/hugepages.sh@146 -- # 
NRHUGE=512 00:02:51.301 17:28:12 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:51.301 17:28:12 -- setup/hugepages.sh@146 -- # setup output 00:02:51.301 17:28:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.301 17:28:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:54.598 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:54.598 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:54.598 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:54.598 17:28:15 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:54.598 17:28:15 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:54.598 17:28:15 -- setup/hugepages.sh@89 -- # local node 00:02:54.598 17:28:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:54.598 17:28:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:54.598 17:28:15 -- setup/hugepages.sh@92 -- # local surp 00:02:54.598 17:28:15 -- setup/hugepages.sh@93 -- # local resv 00:02:54.598 17:28:15 -- setup/hugepages.sh@94 -- # local anon 00:02:54.598 17:28:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:54.598 17:28:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:54.598 17:28:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:54.598 17:28:15 -- setup/common.sh@18 -- # local node= 00:02:54.598 17:28:15 -- setup/common.sh@19 -- # local var val 00:02:54.598 17:28:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.598 17:28:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.598 17:28:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.598 17:28:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.598 17:28:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.598 17:28:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170552700 kB' 'MemAvailable: 173785648 kB' 'Buffers: 3896 kB' 'Cached: 14635020 kB' 'SwapCached: 0 kB' 'Active: 11501388 kB' 'Inactive: 3694072 kB' 'Active(anon): 11083432 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
559288 kB' 'Mapped: 217280 kB' 'Shmem: 10526888 kB' 'KReclaimable: 530268 kB' 'Slab: 1182408 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 652140 kB' 'KernelStack: 20784 kB' 'PageTables: 10092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12635896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317448 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 
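For orientation at this point in the trace: per_node_1G_alloc calls get_test_nr_hugepages with 1048576 kB and nodes 0 and 1, which with the 2048 kB Hugepagesize reported in /proc/meminfo comes out to 512 two-megabyte pages on each node (1024 in total), the layout the NRHUGE=512 HUGENODE=0,1 setup.sh invocation above is meant to provide. The snippet below is only a rough standalone sketch of the equivalent raw per-node sysfs writes; the helper name and flow are illustrative assumptions, not SPDK's scripts/setup.sh.

# Illustrative only: allocate size_kb worth of default-size hugepages on each listed
# NUMA node by writing the standard per-node nr_hugepages sysfs file.
alloc_per_node_hugepages() {
    local size_kb=$1; shift                                    # e.g. 1048576 kB = 1 GiB per node
    local hp_kb pages node
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this system
    pages=$(( size_kb / hp_kb ))                               # 1048576 / 2048 = 512
    for node in "$@"; do                                       # e.g. 0 1
        echo "$pages" | sudo tee \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-${hp_kb}kB/nr_hugepages"
    done
}
# alloc_per_node_hugepages 1048576 0 1    # 512 pages each on node0 and node1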
00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.598 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.598 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 
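The long runs of [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] followed by continue around here are setup/common.sh's get_meminfo stepping through every meminfo key until it reaches the one requested (AnonHugePages in this pass) and echoing its value. A compact standalone equivalent is sketched below; the function name is illustrative, and the sed prefix strip stands in for the trace's "${mem[@]#Node +([0-9]) }" expansion.

# Minimal sketch of a get_meminfo-style lookup: print the value of one key from
# /proc/meminfo, or from a node's meminfo file when a node number is given.
meminfo_value() {
    local key=$1 node=${2:-} var val _
    local file=/proc/meminfo
    [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
    # Per-node files prefix every line with "Node <N> "; drop it so both formats parse alike.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$file")
    return 1
}
# meminfo_value AnonHugePages       -> 0 on this run (the anon=0 seen a little further down)
# meminfo_value HugePages_Surp 0    -> per-node surplus count for node0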
00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.599 17:28:15 -- setup/common.sh@33 -- # echo 0 00:02:54.599 17:28:15 -- setup/common.sh@33 -- # return 0 00:02:54.599 17:28:15 -- setup/hugepages.sh@97 -- # anon=0 00:02:54.599 17:28:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:54.599 17:28:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.599 17:28:15 -- setup/common.sh@18 -- # local node= 00:02:54.599 17:28:15 -- setup/common.sh@19 -- # local var val 00:02:54.599 17:28:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.599 17:28:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.599 17:28:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.599 17:28:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.599 17:28:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.599 17:28:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170551312 kB' 'MemAvailable: 173784260 kB' 'Buffers: 3896 kB' 'Cached: 14635024 kB' 'SwapCached: 0 kB' 'Active: 11501092 kB' 'Inactive: 3694072 kB' 'Active(anon): 11083136 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558968 kB' 'Mapped: 217252 kB' 'Shmem: 10526892 kB' 'KReclaimable: 530268 kB' 'Slab: 1182380 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 652112 kB' 'KernelStack: 20800 kB' 'PageTables: 9916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12635908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317448 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.599 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.599 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 
17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 
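The same key-by-key scan is now repeated for HugePages_Surp and, right after it, HugePages_Rsvd. Elsewhere in this run (get_nodes, and the node0=1024 check in default_setup above) the scripts also walk the per-node sysfs tree; the sketch below shows that enumeration pattern on its own. Names are illustrative, and the hugepages-2048kB directory matches this system's 2048 kB Hugepagesize.

# Enumerate NUMA nodes from sysfs and report how many default-size hugepages each holds,
# mirroring the nodes_sys/nodes_test bookkeeping seen in setup/hugepages.sh.
shopt -s extglob nullglob                      # +([0-9]) globbing; expand to nothing if no match
declare -A node_pages=()
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}                    # .../node0 -> 0, .../node1 -> 1
    node_pages[$node]=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
done
for node in "${!node_pages[@]}"; do
    echo "node${node}=${node_pages[$node]}"    # expected after this test's setup: 512 on each node
done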
00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.600 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.600 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 
-- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.601 17:28:15 -- setup/common.sh@33 -- # echo 0 00:02:54.601 17:28:15 -- setup/common.sh@33 -- # return 0 00:02:54.601 17:28:15 -- setup/hugepages.sh@99 -- # surp=0 00:02:54.601 17:28:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:54.601 17:28:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:54.601 17:28:15 -- setup/common.sh@18 -- # local node= 00:02:54.601 17:28:15 -- setup/common.sh@19 -- # local var val 00:02:54.601 17:28:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.601 17:28:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.601 17:28:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.601 17:28:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.601 17:28:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.601 17:28:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170550588 kB' 'MemAvailable: 173783536 kB' 'Buffers: 3896 kB' 'Cached: 14635024 kB' 'SwapCached: 0 kB' 'Active: 11500456 kB' 'Inactive: 3694072 kB' 'Active(anon): 11082500 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558784 kB' 'Mapped: 217168 kB' 'Shmem: 10526892 kB' 'KReclaimable: 530268 kB' 'Slab: 1182372 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 652104 kB' 'KernelStack: 20800 kB' 'PageTables: 9712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12635920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317480 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 
17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.601 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.601 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 
00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.602 17:28:15 -- setup/common.sh@33 -- # echo 0 00:02:54.602 17:28:15 -- setup/common.sh@33 -- # return 0 00:02:54.602 17:28:15 -- setup/hugepages.sh@100 -- # resv=0 00:02:54.602 17:28:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:54.602 nr_hugepages=1024 00:02:54.602 17:28:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:54.602 resv_hugepages=0 00:02:54.602 17:28:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:54.602 surplus_hugepages=0 00:02:54.602 17:28:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:54.602 anon_hugepages=0 00:02:54.602 17:28:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:54.602 17:28:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
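The loop traced up to this point is a plain key scan over a meminfo snapshot: the snapshot is read into an array once, each field is split with IFS=': ' and read -r var val _, non-matching keys fall through to continue, and the matching field's value is echoed (0 for HugePages_Rsvd here, hence resv=0 alongside nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0). A minimal, self-contained sketch of that pattern, using a hypothetical helper name rather than the actual setup/common.sh implementation:

#!/usr/bin/env bash
# Minimal sketch of the key-scan pattern visible in the trace above.
# Hypothetical helper name; the real logic is in the SPDK test scripts.
get_meminfo_value() {
    local get=$1                          # e.g. HugePages_Rsvd or HugePages_Total
    local mem_f=/proc/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"             # snapshot the whole file once
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # skip fields until the key matches
        echo "$val"                       # e.g. 1024 for HugePages_Total
        return 0
    done
    return 1
}

resv=$(get_meminfo_value HugePages_Rsvd)   # 0 in the run above
nr=$(get_meminfo_value HugePages_Total)    # 1024 in the run above
echo "nr_hugepages=$nr resv_hugepages=$resv"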
00:02:54.602 17:28:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:54.602 17:28:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:54.602 17:28:15 -- setup/common.sh@18 -- # local node= 00:02:54.602 17:28:15 -- setup/common.sh@19 -- # local var val 00:02:54.602 17:28:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.602 17:28:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.602 17:28:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.602 17:28:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.602 17:28:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.602 17:28:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170548916 kB' 'MemAvailable: 173781864 kB' 'Buffers: 3896 kB' 'Cached: 14635052 kB' 'SwapCached: 0 kB' 'Active: 11500184 kB' 'Inactive: 3694072 kB' 'Active(anon): 11082228 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558468 kB' 'Mapped: 217176 kB' 'Shmem: 10526920 kB' 'KReclaimable: 530268 kB' 'Slab: 1182372 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 652104 kB' 'KernelStack: 20848 kB' 'PageTables: 9796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12631536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317448 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.602 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.602 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 
-- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 
00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.603 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.603 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- 
setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.604 17:28:15 -- setup/common.sh@33 -- # echo 1024 00:02:54.604 17:28:15 -- setup/common.sh@33 -- # return 0 00:02:54.604 17:28:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:54.604 17:28:15 -- setup/hugepages.sh@112 -- # get_nodes 00:02:54.604 17:28:15 -- setup/hugepages.sh@27 -- # local node 00:02:54.604 17:28:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.604 17:28:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:54.604 17:28:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.604 17:28:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:54.604 17:28:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:54.604 17:28:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:54.604 17:28:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.604 17:28:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.604 17:28:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:54.604 17:28:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.604 17:28:15 -- setup/common.sh@18 -- # local node=0 00:02:54.604 17:28:15 -- setup/common.sh@19 -- # local var val 00:02:54.604 17:28:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.604 17:28:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.604 17:28:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:54.604 17:28:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:54.604 17:28:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.604 17:28:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:54.604 17:28:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92706220 kB' 'MemUsed: 4909408 kB' 'SwapCached: 0 kB' 'Active: 2235964 kB' 'Inactive: 216956 kB' 'Active(anon): 2074140 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2289840 kB' 'Mapped: 64840 kB' 'AnonPages: 166228 kB' 'Shmem: 1911060 kB' 'KernelStack: 11480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354976 kB' 'Slab: 652348 kB' 'SReclaimable: 354976 kB' 'SUnreclaim: 297372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # 
continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.604 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.604 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 
17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@33 -- # echo 0 00:02:54.605 17:28:15 -- setup/common.sh@33 -- # return 0 00:02:54.605 17:28:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.605 17:28:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.605 17:28:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.605 17:28:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:54.605 17:28:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.605 17:28:15 -- setup/common.sh@18 -- # local node=1 00:02:54.605 17:28:15 -- setup/common.sh@19 -- # local var val 00:02:54.605 17:28:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.605 17:28:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.605 17:28:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:54.605 17:28:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:54.605 17:28:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.605 17:28:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77847024 kB' 'MemUsed: 15918484 kB' 'SwapCached: 0 kB' 'Active: 9263728 kB' 'Inactive: 3477116 kB' 'Active(anon): 9007596 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477116 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12349136 kB' 'Mapped: 152324 kB' 'AnonPages: 391792 kB' 'Shmem: 8615888 kB' 'KernelStack: 9176 kB' 'PageTables: 5276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175292 kB' 'Slab: 530088 kB' 'SReclaimable: 175292 kB' 'SUnreclaim: 354796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 
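From here the trace repeats the same scan against per-NUMA-node counters: node=0 and then node=1 switch the source from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo, and the "Node <N> " prefix on every line of those files is stripped before the key lookup, which returns HugePages_Surp: 0 on both nodes (leading to the node0=512 expecting 512 / node1=512 expecting 512 results further down). A minimal sketch of that source selection and prefix stripping, again with a hypothetical helper name:

#!/usr/bin/env bash
# Minimal sketch of the per-node source selection seen in the trace.
# Hypothetical helper name; mirrors the node0/node1 handling shown above.
shopt -s extglob                          # needed for the +([0-9]) pattern

node_meminfo_value() {
    local get=$1 node=${2:-}              # key and optional NUMA node id
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it so the
    # same "key: value" scan works for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

node_meminfo_value HugePages_Surp 0   # 0 on node0 in the run above
node_meminfo_value HugePages_Surp 1   # 0 on node1 in the run above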
00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.605 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.605 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # continue 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.606 17:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.606 17:28:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.606 17:28:15 -- setup/common.sh@33 -- # echo 0 00:02:54.606 17:28:15 -- setup/common.sh@33 -- # return 0 00:02:54.606 17:28:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.606 17:28:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.606 17:28:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.606 17:28:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.606 17:28:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:54.606 node0=512 expecting 512 00:02:54.606 17:28:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.606 17:28:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.606 17:28:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.606 17:28:15 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:54.606 node1=512 expecting 512 00:02:54.606 17:28:15 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:54.606 00:02:54.606 real 0m2.916s 00:02:54.606 user 0m1.167s 00:02:54.606 sys 0m1.807s 00:02:54.606 17:28:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.606 17:28:15 -- common/autotest_common.sh@10 -- # set +x 00:02:54.606 ************************************ 00:02:54.606 END TEST per_node_1G_alloc 00:02:54.606 ************************************ 00:02:54.606 17:28:15 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:54.606 17:28:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:54.606 17:28:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:54.606 17:28:15 -- common/autotest_common.sh@10 -- # set +x 00:02:54.606 ************************************ 00:02:54.606 START TEST even_2G_alloc 00:02:54.606 ************************************ 00:02:54.606 17:28:15 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:02:54.606 17:28:15 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:54.606 17:28:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:54.606 17:28:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:54.606 17:28:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:54.606 17:28:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:54.606 17:28:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:54.606 17:28:15 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:54.606 17:28:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:54.606 17:28:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:54.606 17:28:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:54.606 17:28:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:54.606 17:28:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:54.606 17:28:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:54.606 17:28:15 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:54.606 17:28:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.606 17:28:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:54.606 17:28:15 -- setup/hugepages.sh@83 -- # : 512 00:02:54.606 17:28:15 -- setup/hugepages.sh@84 -- # : 1 00:02:54.606 17:28:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.606 17:28:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:54.606 17:28:15 -- setup/hugepages.sh@83 -- # : 0 00:02:54.606 17:28:15 -- setup/hugepages.sh@84 -- # : 0 00:02:54.606 17:28:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.606 17:28:15 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:54.606 17:28:15 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:54.606 17:28:15 -- setup/hugepages.sh@153 -- # setup output 00:02:54.606 17:28:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.606 17:28:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:57.201 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:57.201 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:80:04.2 (8086 
2021): Already using the vfio-pci driver 00:02:57.201 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:57.201 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:57.201 17:28:18 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:57.201 17:28:18 -- setup/hugepages.sh@89 -- # local node 00:02:57.201 17:28:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:57.201 17:28:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:57.201 17:28:18 -- setup/hugepages.sh@92 -- # local surp 00:02:57.201 17:28:18 -- setup/hugepages.sh@93 -- # local resv 00:02:57.201 17:28:18 -- setup/hugepages.sh@94 -- # local anon 00:02:57.201 17:28:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:57.201 17:28:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:57.201 17:28:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:57.201 17:28:18 -- setup/common.sh@18 -- # local node= 00:02:57.201 17:28:18 -- setup/common.sh@19 -- # local var val 00:02:57.201 17:28:18 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.201 17:28:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.201 17:28:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.201 17:28:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.201 17:28:18 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.201 17:28:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.201 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.201 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170584044 kB' 'MemAvailable: 173816992 kB' 'Buffers: 3896 kB' 'Cached: 14635152 kB' 'SwapCached: 0 kB' 'Active: 11495760 kB' 'Inactive: 3694072 kB' 'Active(anon): 11077804 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553976 kB' 'Mapped: 216184 kB' 'Shmem: 10527020 kB' 'KReclaimable: 530268 kB' 'Slab: 1181900 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 651632 kB' 'KernelStack: 20464 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12607664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 
17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.202 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.202 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.203 17:28:18 -- 
setup/common.sh@33 -- # echo 0 00:02:57.203 17:28:18 -- setup/common.sh@33 -- # return 0 00:02:57.203 17:28:18 -- setup/hugepages.sh@97 -- # anon=0 00:02:57.203 17:28:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:57.203 17:28:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.203 17:28:18 -- setup/common.sh@18 -- # local node= 00:02:57.203 17:28:18 -- setup/common.sh@19 -- # local var val 00:02:57.203 17:28:18 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.203 17:28:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.203 17:28:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.203 17:28:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.203 17:28:18 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.203 17:28:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170584196 kB' 'MemAvailable: 173817144 kB' 'Buffers: 3896 kB' 'Cached: 14635156 kB' 'SwapCached: 0 kB' 'Active: 11495224 kB' 'Inactive: 3694072 kB' 'Active(anon): 11077268 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553504 kB' 'Mapped: 216152 kB' 'Shmem: 10527024 kB' 'KReclaimable: 530268 kB' 'Slab: 1181932 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 651664 kB' 'KernelStack: 20528 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12607676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 
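The pass traced above is setup/common.sh's get_meminfo routine walking /proc/meminfo key by key until it hits the requested field (AnonHugePages, which is 0 kB on this box) and echoing that value. As a minimal standalone sketch of the same lookup, not the script's own code and with the hypothetical name get_meminfo_field:

get_meminfo_field() {
    # Return the numeric value of one /proc/meminfo field, or 0 if it is absent.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do    # the ': ' IFS drops the colon; a trailing "kB" unit lands in _
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}
# get_meminfo_field AnonHugePages    -> 0 on this machine (matches the snapshot above)
# get_meminfo_field HugePages_Total  -> 1024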
17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 
17:28:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.203 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.203 17:28:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': 
' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.204 17:28:18 -- setup/common.sh@33 -- # echo 0 00:02:57.204 17:28:18 -- setup/common.sh@33 -- # return 0 00:02:57.204 17:28:18 -- setup/hugepages.sh@99 -- # surp=0 00:02:57.204 17:28:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:57.204 17:28:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:57.204 17:28:18 -- setup/common.sh@18 -- # local node= 00:02:57.204 17:28:18 -- setup/common.sh@19 -- # local var val 00:02:57.204 17:28:18 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.204 17:28:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.204 17:28:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.204 17:28:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.204 17:28:18 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.204 17:28:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- 
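Both passes so far call get_meminfo with an empty node argument, so the test for /sys/devices/system/node/node/meminfo fails and the read falls back to /proc/meminfo; with a node number it switches to the per-node sysfs file, as happens later in this log. A rough sketch of that source selection, assuming existence of the sysfs file is the only criterion (pick_meminfo_file is a hypothetical helper name):

pick_meminfo_file() {
    local node=$1
    local mem_f=/proc/meminfo                                  # default: system-wide meminfo
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo     # per-NUMA-node meminfo
    fi
    echo "$mem_f"
}
# pick_meminfo_file    -> /proc/meminfo (empty node, as in the passes above)
# pick_meminfo_file 0  -> /sys/devices/system/node/node0/meminfo (as in the per-node pass further down)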
setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170583440 kB' 'MemAvailable: 173816388 kB' 'Buffers: 3896 kB' 'Cached: 14635168 kB' 'SwapCached: 0 kB' 'Active: 11495236 kB' 'Inactive: 3694072 kB' 'Active(anon): 11077280 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553508 kB' 'Mapped: 216152 kB' 'Shmem: 10527036 kB' 'KReclaimable: 530268 kB' 'Slab: 1181932 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 651664 kB' 'KernelStack: 20528 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12607692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.204 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.204 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 
00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- 
setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.205 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.205 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.206 17:28:18 -- setup/common.sh@33 -- # echo 0 00:02:57.206 17:28:18 -- setup/common.sh@33 -- # return 0 00:02:57.206 17:28:18 -- setup/hugepages.sh@100 -- # resv=0 00:02:57.206 17:28:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:57.206 nr_hugepages=1024 00:02:57.206 17:28:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:57.206 resv_hugepages=0 00:02:57.206 17:28:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:57.206 surplus_hugepages=0 00:02:57.206 17:28:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:57.206 anon_hugepages=0 00:02:57.206 17:28:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:57.206 17:28:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:57.206 17:28:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:57.206 17:28:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:57.206 17:28:18 -- setup/common.sh@18 -- # local node= 00:02:57.206 17:28:18 -- setup/common.sh@19 -- # local var val 00:02:57.206 17:28:18 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.206 17:28:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.206 17:28:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.206 17:28:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.206 17:28:18 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.206 17:28:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170583440 kB' 'MemAvailable: 173816388 kB' 'Buffers: 3896 kB' 'Cached: 14635168 kB' 'SwapCached: 0 kB' 'Active: 11495236 kB' 'Inactive: 3694072 kB' 'Active(anon): 11077280 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553508 kB' 'Mapped: 216152 kB' 'Shmem: 10527036 kB' 'KReclaimable: 530268 kB' 'Slab: 
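At this point the script has determined anon=0, surp=0 and resv=0, echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and then verifies that the pool adds up before re-reading HugePages_Total. A worked sketch of that arithmetic with the values from this run, assuming the literal 1024 in the trace is the page count the test requested:

nr_hugepages=1024   # echoed above
resv=0              # HugePages_Rsvd from the snapshot
surp=0              # HugePages_Surp from the snapshot
anon=0              # AnonHugePages (0 kB in this run)
(( 1024 == nr_hugepages + surp + resv )) && echo 'pool matches the requested page count'
(( 1024 == nr_hugepages ))               && echo 'no surplus or reserved pages outstanding'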
1181932 kB' 'SReclaimable: 530268 kB' 'SUnreclaim: 651664 kB' 'KernelStack: 20528 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12607704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.206 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.206 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 
17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # continue 
00:02:57.207 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.207 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.207 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.207 17:28:18 -- setup/common.sh@33 -- # echo 1024 00:02:57.208 17:28:18 -- setup/common.sh@33 -- # return 0 00:02:57.208 17:28:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:57.208 17:28:18 -- setup/hugepages.sh@112 -- # get_nodes 00:02:57.208 17:28:18 -- setup/hugepages.sh@27 -- # local node 00:02:57.208 17:28:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.208 17:28:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:57.208 17:28:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.208 17:28:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:57.208 17:28:18 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:57.208 17:28:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:57.208 17:28:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.208 17:28:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.208 17:28:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:57.208 17:28:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.208 17:28:18 -- setup/common.sh@18 -- # local node=0 00:02:57.208 17:28:18 -- setup/common.sh@19 -- # local var val 00:02:57.208 17:28:18 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.208 17:28:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.208 17:28:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:57.208 17:28:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:57.208 17:28:18 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.208 17:28:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92714688 kB' 'MemUsed: 4900940 kB' 'SwapCached: 0 kB' 'Active: 2233756 kB' 'Inactive: 216956 kB' 'Active(anon): 2071932 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2289868 kB' 'Mapped: 64496 kB' 'AnonPages: 164076 kB' 'Shmem: 1911088 kB' 'KernelStack: 11368 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354976 kB' 'Slab: 651784 kB' 'SReclaimable: 354976 kB' 'SUnreclaim: 296808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 
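get_nodes found two NUMA nodes and expects 512 hugepages on each (nodes_sys set to 512 for both, no_nodes=2), so get_meminfo is re-run per node against /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the script strips before parsing. A small equivalent per-node lookup, using awk instead of the script's read loop and with the hypothetical name node_hugepages_surp:

node_hugepages_surp() {
    # sysfs node meminfo lines look like "Node 0 HugePages_Surp:     0"
    awk '$3 == "HugePages_Surp:" { print $4 }' "/sys/devices/system/node/node$1/meminfo"
}
# node_hugepages_surp 0  -> 0 in this run; the node0 snapshot above reports 512 hugepages total and free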
17:28:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 
-- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
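[editor's note] The trace above shows setup/common.sh's get_meminfo reading the node0 meminfo snapshot, stripping the leading "Node N" column, and then walking it one "key: value" pair at a time, skipping every key that is not HugePages_Surp. As a rough standalone sketch of that lookup, under the assumption that a simplified helper is acceptable (get_meminfo_sketch is a hypothetical name, not the SPDK function; the real script uses mapfile plus the read loop traced here), the same result can be obtained like this:

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch HugePages_Surp [node]
    # Without a node it reads /proc/meminfo; with a node it reads the
    # per-node file and drops the "Node N" prefix, mirroring the mem_f and
    # "${mem[@]#Node +([0-9]) }" handling visible in the trace above.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Same comparison the xtrace repeats for every field: print the
        # value only when the key matches the one requested.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+[[:space:]]+//' "$mem_f")
    return 1
}
# Example, matching the scan above: get_meminfo_sketch HugePages_Surp 0  ->  0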
00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.208 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.208 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@33 -- # echo 0 00:02:57.209 17:28:18 -- setup/common.sh@33 -- # return 0 00:02:57.209 17:28:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.209 17:28:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.209 17:28:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.209 17:28:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:57.209 17:28:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.209 17:28:18 -- setup/common.sh@18 -- # local node=1 00:02:57.209 17:28:18 -- setup/common.sh@19 -- # local var val 00:02:57.209 17:28:18 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.209 17:28:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.209 17:28:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:57.209 17:28:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:57.209 17:28:18 -- 
setup/common.sh@28 -- # mapfile -t mem 00:02:57.209 17:28:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.209 17:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77867996 kB' 'MemUsed: 15897512 kB' 'SwapCached: 0 kB' 'Active: 9261628 kB' 'Inactive: 3477116 kB' 'Active(anon): 9005496 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477116 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12349196 kB' 'Mapped: 151656 kB' 'AnonPages: 389580 kB' 'Shmem: 8615948 kB' 'KernelStack: 9144 kB' 'PageTables: 5128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175292 kB' 'Slab: 530148 kB' 'SReclaimable: 175292 kB' 'SUnreclaim: 354856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.209 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.209 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 
-- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # continue 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.210 17:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.210 17:28:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.210 17:28:18 -- setup/common.sh@33 -- # echo 0 00:02:57.210 17:28:18 -- setup/common.sh@33 -- # return 0 00:02:57.210 17:28:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.210 17:28:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.210 17:28:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.210 17:28:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.210 17:28:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:57.210 node0=512 expecting 512 00:02:57.210 17:28:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.210 17:28:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.210 17:28:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.210 17:28:18 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:57.210 node1=512 expecting 512 00:02:57.210 17:28:18 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:57.210 00:02:57.210 real 0m2.927s 00:02:57.210 user 0m1.175s 00:02:57.210 sys 0m1.822s 00:02:57.210 17:28:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.210 17:28:18 -- common/autotest_common.sh@10 -- # set +x 00:02:57.210 ************************************ 00:02:57.210 END TEST even_2G_alloc 00:02:57.210 ************************************ 00:02:57.210 17:28:18 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:57.210 17:28:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:57.210 17:28:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:57.210 17:28:18 -- common/autotest_common.sh@10 -- # set +x 00:02:57.210 ************************************ 00:02:57.210 START TEST odd_alloc 00:02:57.210 ************************************ 00:02:57.210 17:28:18 -- common/autotest_common.sh@1104 -- # odd_alloc 00:02:57.210 17:28:18 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:57.210 17:28:18 -- setup/hugepages.sh@49 -- # local size=2098176 00:02:57.210 17:28:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:57.210 17:28:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:57.210 17:28:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:57.210 17:28:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:57.210 17:28:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:57.210 17:28:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.210 17:28:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:57.210 17:28:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.210 17:28:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:57.210 17:28:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:57.210 17:28:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:57.210 
17:28:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:57.210 17:28:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.210 17:28:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:57.210 17:28:18 -- setup/hugepages.sh@83 -- # : 513 00:02:57.210 17:28:18 -- setup/hugepages.sh@84 -- # : 1 00:02:57.210 17:28:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.210 17:28:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:57.210 17:28:18 -- setup/hugepages.sh@83 -- # : 0 00:02:57.210 17:28:18 -- setup/hugepages.sh@84 -- # : 0 00:02:57.210 17:28:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.210 17:28:18 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:57.210 17:28:18 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:57.210 17:28:18 -- setup/hugepages.sh@160 -- # setup output 00:02:57.210 17:28:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.210 17:28:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:00.507 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:00.507 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:00.507 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:00.507 17:28:21 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:00.507 17:28:21 -- setup/hugepages.sh@89 -- # local node 00:03:00.507 17:28:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:00.507 17:28:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:00.507 17:28:21 -- setup/hugepages.sh@92 -- # local surp 00:03:00.507 17:28:21 -- setup/hugepages.sh@93 -- # local resv 00:03:00.507 17:28:21 -- setup/hugepages.sh@94 -- # local anon 00:03:00.507 17:28:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:00.507 17:28:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:00.507 17:28:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:00.507 17:28:21 -- setup/common.sh@18 -- # local node= 00:03:00.507 17:28:21 -- setup/common.sh@19 -- # local var val 00:03:00.507 17:28:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.507 17:28:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.507 17:28:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.507 17:28:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.507 17:28:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.507 17:28:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170575964 kB' 'MemAvailable: 173808880 kB' 'Buffers: 3896 kB' 'Cached: 14635260 kB' 'SwapCached: 0 kB' 'Active: 11497960 kB' 'Inactive: 3694072 kB' 'Active(anon): 11080004 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556160 kB' 'Mapped: 216192 kB' 'Shmem: 10527128 kB' 'KReclaimable: 530204 kB' 'Slab: 1181036 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650832 kB' 'KernelStack: 20512 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12608172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- 
setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.507 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.507 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ 
AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.508 17:28:21 -- setup/common.sh@33 -- # echo 0 00:03:00.508 17:28:21 -- setup/common.sh@33 -- # return 0 00:03:00.508 17:28:21 -- setup/hugepages.sh@97 -- # anon=0 00:03:00.508 17:28:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:00.508 17:28:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.508 17:28:21 -- setup/common.sh@18 -- # local node= 00:03:00.508 17:28:21 -- setup/common.sh@19 -- # local var val 00:03:00.508 17:28:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.508 17:28:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.508 17:28:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.508 17:28:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.508 17:28:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.508 17:28:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170578232 kB' 'MemAvailable: 173811148 kB' 'Buffers: 3896 kB' 'Cached: 14635264 kB' 'SwapCached: 0 kB' 'Active: 11497948 kB' 'Inactive: 3694072 kB' 'Active(anon): 11079992 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556148 kB' 'Mapped: 216192 kB' 'Shmem: 10527132 kB' 'KReclaimable: 530204 kB' 'Slab: 1181036 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650832 kB' 'KernelStack: 20480 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12608184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.508 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.508 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 
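[editor's note] For the odd_alloc case traced above, the 2098176 kB request corresponds to nr_hugepages=1025, and the per-node loop leaves nodes_test holding 513 and 512, which sum back to 1025. A minimal sketch of one way to split an odd page count across NUMA nodes so it reproduces those traced values (split_pages_sketch is a hypothetical helper, not part of setup/hugepages.sh, and the remainder-handling rule beyond what the trace shows is an assumption):

split_pages_sketch() {
    local total=$1 nodes=$2
    local base=$(( total / nodes )) extra=$(( total % nodes )) i
    for (( i = 0; i < nodes; i++ )); do
        # The first "extra" nodes take one additional page each to absorb
        # the remainder of the division.
        echo "node$i=$(( base + (i < extra ? 1 : 0) ))"
    done
}
# split_pages_sketch 1025 2
#   node0=513
#   node1=512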
00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 
-- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.509 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.509 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.510 17:28:21 -- setup/common.sh@33 -- # echo 0 00:03:00.510 17:28:21 -- setup/common.sh@33 -- # return 0 00:03:00.510 17:28:21 -- setup/hugepages.sh@99 -- # surp=0 00:03:00.510 17:28:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:00.510 17:28:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:00.510 17:28:21 -- setup/common.sh@18 -- # local node= 00:03:00.510 17:28:21 -- setup/common.sh@19 -- # local var val 00:03:00.510 17:28:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.510 17:28:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.510 17:28:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.510 17:28:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.510 17:28:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.510 17:28:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.510 17:28:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170584172 kB' 'MemAvailable: 173817088 kB' 'Buffers: 3896 kB' 'Cached: 14635276 kB' 'SwapCached: 0 kB' 'Active: 11497376 kB' 'Inactive: 3694072 kB' 'Active(anon): 11079420 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555556 kB' 'Mapped: 216160 kB' 'Shmem: 10527144 kB' 'KReclaimable: 530204 kB' 'Slab: 1181028 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650824 kB' 'KernelStack: 20560 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12619404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 
17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.510 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.510 17:28:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.511 17:28:21 -- setup/common.sh@33 -- # echo 0 00:03:00.511 17:28:21 -- setup/common.sh@33 -- # return 0 
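The long run of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue' records above is the xtrace of the get_meminfo helper scanning every meminfo field until it reaches the requested key (HugePages_Rsvd here, which resolves to 0). A minimal standalone sketch of that lookup pattern follows, assuming bash with process substitution; the function name is illustrative and the sed-based 'Node <n> ' stripping stands in for the helper's extglob expansion, so this approximates the common.sh helper rather than copying it:

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # When a NUMA node is given, read that node's counters from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip fields until the requested key
        echo "$val"                        # e.g. 0 for HugePages_Rsvd above
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix each line with "Node <n> "
    return 1
}

# Usage matching the trace: get_meminfo_sketch HugePages_Rsvd   -> 0 (system-wide)
#                           get_meminfo_sketch HugePages_Surp 0 -> 0 (node 0)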
00:03:00.511 17:28:21 -- setup/hugepages.sh@100 -- # resv=0 00:03:00.511 17:28:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:00.511 nr_hugepages=1025 00:03:00.511 17:28:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:00.511 resv_hugepages=0 00:03:00.511 17:28:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:00.511 surplus_hugepages=0 00:03:00.511 17:28:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:00.511 anon_hugepages=0 00:03:00.511 17:28:21 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:00.511 17:28:21 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:00.511 17:28:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:00.511 17:28:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:00.511 17:28:21 -- setup/common.sh@18 -- # local node= 00:03:00.511 17:28:21 -- setup/common.sh@19 -- # local var val 00:03:00.511 17:28:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.511 17:28:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.511 17:28:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.511 17:28:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.511 17:28:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.511 17:28:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170584764 kB' 'MemAvailable: 173817680 kB' 'Buffers: 3896 kB' 'Cached: 14635300 kB' 'SwapCached: 0 kB' 'Active: 11496476 kB' 'Inactive: 3694072 kB' 'Active(anon): 11078520 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554616 kB' 'Mapped: 216160 kB' 'Shmem: 10527168 kB' 'KReclaimable: 530204 kB' 'Slab: 1181028 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650824 kB' 'KernelStack: 20496 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12607848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 
-- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.511 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.511 17:28:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 
-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.512 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.512 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.513 17:28:21 -- setup/common.sh@33 -- # echo 1025 00:03:00.513 17:28:21 -- setup/common.sh@33 -- # return 0 00:03:00.513 17:28:21 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:00.513 17:28:21 -- setup/hugepages.sh@112 -- # get_nodes 00:03:00.513 17:28:21 -- setup/hugepages.sh@27 -- # local node 00:03:00.513 17:28:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.513 17:28:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:00.513 17:28:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.513 17:28:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:00.513 17:28:21 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:00.513 17:28:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:00.513 17:28:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:00.513 17:28:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:00.513 17:28:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:00.513 17:28:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.513 17:28:21 -- setup/common.sh@18 -- # local node=0 00:03:00.513 
17:28:21 -- setup/common.sh@19 -- # local var val 00:03:00.513 17:28:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.513 17:28:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.513 17:28:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:00.513 17:28:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:00.513 17:28:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.513 17:28:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92695896 kB' 'MemUsed: 4919732 kB' 'SwapCached: 0 kB' 'Active: 2234068 kB' 'Inactive: 216956 kB' 'Active(anon): 2072244 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2289988 kB' 'Mapped: 64500 kB' 'AnonPages: 164220 kB' 'Shmem: 1911208 kB' 'KernelStack: 11384 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354976 kB' 'Slab: 651276 kB' 'SReclaimable: 354976 kB' 'SUnreclaim: 296300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.513 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.513 17:28:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- 
# continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@33 -- # echo 0 00:03:00.514 17:28:21 -- setup/common.sh@33 -- # return 0 00:03:00.514 17:28:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.514 17:28:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:00.514 17:28:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:00.514 17:28:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:00.514 17:28:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.514 17:28:21 -- setup/common.sh@18 -- # local node=1 00:03:00.514 17:28:21 -- setup/common.sh@19 -- # local var val 00:03:00.514 17:28:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.514 17:28:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.514 17:28:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:00.514 17:28:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:00.514 17:28:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.514 17:28:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77888868 kB' 'MemUsed: 15876640 kB' 'SwapCached: 0 kB' 'Active: 9262772 kB' 'Inactive: 3477116 kB' 'Active(anon): 9006640 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477116 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12349220 kB' 'Mapped: 151660 kB' 'AnonPages: 390776 kB' 'Shmem: 8615972 kB' 'KernelStack: 9128 kB' 'PageTables: 5068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175228 kB' 'Slab: 529752 kB' 'SReclaimable: 175228 kB' 'SUnreclaim: 354524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.514 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.514 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # 
continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # continue 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.515 17:28:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.515 17:28:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.515 17:28:21 -- setup/common.sh@33 -- # echo 0 00:03:00.515 17:28:21 -- setup/common.sh@33 -- # return 0 00:03:00.515 17:28:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.515 17:28:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.515 17:28:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.515 17:28:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:00.515 node0=512 expecting 513 00:03:00.515 17:28:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.515 17:28:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.515 17:28:21 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.515 17:28:21 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:00.515 node1=513 expecting 512 00:03:00.515 17:28:21 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:00.515 00:03:00.515 real 0m2.965s 00:03:00.515 user 0m1.231s 00:03:00.515 sys 0m1.803s 00:03:00.515 17:28:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:00.515 17:28:21 -- common/autotest_common.sh@10 -- # set +x 00:03:00.515 ************************************ 00:03:00.515 END TEST odd_alloc 00:03:00.515 ************************************ 00:03:00.515 17:28:21 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:00.515 17:28:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:00.515 17:28:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:00.515 17:28:21 -- common/autotest_common.sh@10 -- # set +x 00:03:00.515 ************************************ 00:03:00.515 START TEST custom_alloc 00:03:00.515 ************************************ 00:03:00.515 17:28:21 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:00.515 17:28:21 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:00.515 17:28:21 -- setup/hugepages.sh@169 -- # local node 00:03:00.515 17:28:21 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:00.515 17:28:21 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:00.515 17:28:21 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:00.515 17:28:21 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:00.515 17:28:21 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:00.515 17:28:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:00.515 17:28:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:00.515 17:28:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.515 17:28:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.515 17:28:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:00.515 17:28:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.515 17:28:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.515 17:28:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.515 17:28:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:00.515 17:28:21 -- setup/hugepages.sh@83 -- # : 256 00:03:00.515 17:28:21 -- setup/hugepages.sh@84 -- # : 1 00:03:00.515 17:28:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:00.515 17:28:21 -- setup/hugepages.sh@83 -- # : 0 00:03:00.515 17:28:21 -- setup/hugepages.sh@84 -- # : 0 00:03:00.515 17:28:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:00.515 17:28:21 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:00.515 17:28:21 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:00.515 17:28:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.515 
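Before the trace continues, note what custom_alloc is assembling: each requested size in kB is divided by the 2048 kB default hugepage size (units inferred from the 'Hugepagesize: 2048 kB' fields in the dumps above and from 1048576 yielding 512 pages), and the per-node counts are then handed to setup.sh through HUGENODE. A short sketch of that arithmetic and of the standard per-node sysfs knob it corresponds to; the tee lines are a hypothetical direct equivalent, not necessarily how setup.sh applies the counts internally:

hugepage_kb=2048                      # Hugepagesize reported in the meminfo dumps
echo $(( 1048576 / hugepage_kb ))     # 512  -> nodes_hp[0]
echo $(( 2097152 / hugepage_kb ))     # 1024 -> nodes_hp[1], computed just below

# Hypothetical direct equivalent of HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024':
echo 512  | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

The 512 + 1024 split totals the nr_hugepages=1536 that the verification pass reports once setup.sh finishes.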
17:28:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:00.515 17:28:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:00.515 17:28:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.515 17:28:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.515 17:28:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.515 17:28:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.515 17:28:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.515 17:28:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.515 17:28:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.515 17:28:21 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:00.515 17:28:21 -- setup/hugepages.sh@78 -- # return 0 00:03:00.515 17:28:21 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:00.515 17:28:21 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:00.515 17:28:21 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:00.515 17:28:21 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:00.515 17:28:21 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:00.516 17:28:21 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:00.516 17:28:21 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:00.516 17:28:21 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:00.516 17:28:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.516 17:28:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.516 17:28:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.516 17:28:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.516 17:28:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.516 17:28:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.516 17:28:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.516 17:28:21 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:00.516 17:28:21 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.516 17:28:21 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:00.516 17:28:21 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.516 17:28:21 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:00.516 17:28:21 -- setup/hugepages.sh@78 -- # return 0 00:03:00.516 17:28:21 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:00.516 17:28:21 -- setup/hugepages.sh@187 -- # setup output 00:03:00.516 17:28:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.516 17:28:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:03.055 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:03.055 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:00:04.0 (8086 2021): Already using the 
vfio-pci driver 00:03:03.055 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:03.055 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:03.055 17:28:24 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:03.055 17:28:24 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:03.056 17:28:24 -- setup/hugepages.sh@89 -- # local node 00:03:03.056 17:28:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.056 17:28:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.056 17:28:24 -- setup/hugepages.sh@92 -- # local surp 00:03:03.056 17:28:24 -- setup/hugepages.sh@93 -- # local resv 00:03:03.056 17:28:24 -- setup/hugepages.sh@94 -- # local anon 00:03:03.056 17:28:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.056 17:28:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.056 17:28:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.056 17:28:24 -- setup/common.sh@18 -- # local node= 00:03:03.056 17:28:24 -- setup/common.sh@19 -- # local var val 00:03:03.056 17:28:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.056 17:28:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.056 17:28:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.056 17:28:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.056 17:28:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.056 17:28:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169527344 kB' 'MemAvailable: 172760260 kB' 'Buffers: 3896 kB' 'Cached: 14635384 kB' 'SwapCached: 0 kB' 'Active: 11497972 kB' 'Inactive: 3694072 kB' 'Active(anon): 11080016 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556100 kB' 'Mapped: 216232 kB' 'Shmem: 10527252 kB' 'KReclaimable: 530204 kB' 'Slab: 1180964 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650760 kB' 'KernelStack: 20544 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12608812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # 
continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 
-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.056 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.056 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.057 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.057 17:28:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 
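The long runs of continue above and below are the per-field scan inside get_meminfo: each line of the meminfo dump is split on IFS=': ' into a key and a value, non-matching keys fall through with continue, and the value of the requested key (AnonHugePages here, then HugePages_Surp, HugePages_Rsvd and HugePages_Total) is echoed back. A minimal standalone sketch of that pattern, reading /proc/meminfo directly rather than the mapfile-captured copy used in the trace:

  # Sketch only: scan meminfo key/value pairs and print the requested
  # field; every non-matching key corresponds to one "continue" above.
  # The traced helper can also point at /sys/devices/system/node/nodeN/meminfo.
  get=AnonHugePages
  while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue
      echo "$val"
      break
  done < /proc/meminfo
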
00:03:03.320 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.320 17:28:24 -- setup/common.sh@33 -- # echo 0 00:03:03.320 17:28:24 -- setup/common.sh@33 -- # return 0 00:03:03.320 17:28:24 -- setup/hugepages.sh@97 -- # anon=0 00:03:03.320 17:28:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.320 17:28:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.320 17:28:24 -- setup/common.sh@18 -- # local node= 00:03:03.320 17:28:24 -- setup/common.sh@19 -- # local var val 00:03:03.320 17:28:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.320 17:28:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.320 17:28:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.320 17:28:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.320 17:28:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.320 17:28:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.320 17:28:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169528336 kB' 'MemAvailable: 172761252 kB' 'Buffers: 3896 kB' 'Cached: 14635388 kB' 'SwapCached: 0 kB' 'Active: 11497664 kB' 'Inactive: 3694072 kB' 'Active(anon): 11079708 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555828 kB' 'Mapped: 216180 kB' 'Shmem: 10527256 kB' 'KReclaimable: 530204 kB' 'Slab: 1180964 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650760 kB' 'KernelStack: 20528 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12608824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.320 17:28:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.320 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.320 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 
00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.321 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.321 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.322 17:28:24 -- setup/common.sh@33 -- # echo 0 00:03:03.322 17:28:24 -- setup/common.sh@33 -- # return 0 00:03:03.322 17:28:24 -- setup/hugepages.sh@99 -- # surp=0 00:03:03.322 17:28:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.322 17:28:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.322 17:28:24 -- setup/common.sh@18 -- # local node= 00:03:03.322 17:28:24 -- setup/common.sh@19 -- # local var val 00:03:03.322 17:28:24 -- setup/common.sh@20 
-- # local mem_f mem 00:03:03.322 17:28:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.322 17:28:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.322 17:28:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.322 17:28:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.322 17:28:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169527928 kB' 'MemAvailable: 172760844 kB' 'Buffers: 3896 kB' 'Cached: 14635400 kB' 'SwapCached: 0 kB' 'Active: 11497656 kB' 'Inactive: 3694072 kB' 'Active(anon): 11079700 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555808 kB' 'Mapped: 216180 kB' 'Shmem: 10527268 kB' 'KReclaimable: 530204 kB' 'Slab: 1180988 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650784 kB' 'KernelStack: 20528 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12608840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 
17:28:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.322 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.322 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 
17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.323 17:28:24 -- setup/common.sh@33 -- # echo 0 00:03:03.323 17:28:24 -- setup/common.sh@33 -- # return 0 00:03:03.323 17:28:24 -- setup/hugepages.sh@100 -- # resv=0 00:03:03.323 17:28:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:03.323 nr_hugepages=1536 00:03:03.323 17:28:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:03.323 resv_hugepages=0 00:03:03.323 17:28:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:03.323 surplus_hugepages=0 00:03:03.323 17:28:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:03.323 anon_hugepages=0 00:03:03.323 17:28:24 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:03.323 17:28:24 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:03.323 17:28:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:03.323 17:28:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:03.323 17:28:24 -- setup/common.sh@18 -- # local node= 00:03:03.323 17:28:24 -- setup/common.sh@19 -- # local var val 00:03:03.323 17:28:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.323 17:28:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.323 17:28:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.323 17:28:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.323 17:28:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.323 17:28:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.323 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.323 17:28:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169527172 kB' 
'MemAvailable: 172760088 kB' 'Buffers: 3896 kB' 'Cached: 14635400 kB' 'SwapCached: 0 kB' 'Active: 11497656 kB' 'Inactive: 3694072 kB' 'Active(anon): 11079700 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555808 kB' 'Mapped: 216180 kB' 'Shmem: 10527268 kB' 'KReclaimable: 530204 kB' 'Slab: 1180988 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650784 kB' 'KernelStack: 20528 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12608852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:03.323 17:28:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 
17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
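Once the system-wide HugePages_Total scan finishes (it returns 1536 just below), verify_nr_hugepages checks that total against nr_hugepages plus surplus and reserved pages, then repeats the lookup per NUMA node against the 512/1024 split encoded in HUGENODE; the node0 meminfo read that follows is the first of those per-node checks. A standalone sketch of what that verification amounts to, with the expected counts taken from this run rather than any general default:

  # Sketch only: 1536 total, 512 on node0 and 1024 on node1 are the
  # values this particular run expects.
  expected_total=1536
  expected_node=(512 1024)
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( ${total:-0} == expected_total )) || echo "system total $total != $expected_total"
  for node in 0 1; do
      got=$(awk '$3 == "HugePages_Total:" {print $4}' "/sys/devices/system/node/node$node/meminfo")
      (( ${got:-0} == expected_node[node] )) || echo "node$node: $got != ${expected_node[node]}"
  done
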
00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.324 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.324 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 
17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.325 17:28:24 -- setup/common.sh@33 -- # echo 1536 00:03:03.325 17:28:24 -- setup/common.sh@33 -- # return 0 00:03:03.325 17:28:24 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:03.325 17:28:24 -- setup/hugepages.sh@112 -- # get_nodes 00:03:03.325 17:28:24 -- setup/hugepages.sh@27 -- # local node 00:03:03.325 17:28:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.325 17:28:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:03.325 17:28:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.325 17:28:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:03.325 17:28:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:03.325 17:28:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:03.325 17:28:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.325 17:28:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.325 17:28:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:03.325 17:28:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.325 17:28:24 -- setup/common.sh@18 -- # local node=0 00:03:03.325 17:28:24 -- setup/common.sh@19 -- # local var val 00:03:03.325 17:28:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.325 17:28:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.325 17:28:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:03.325 17:28:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:03.325 17:28:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.325 17:28:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92703452 kB' 'MemUsed: 4912176 kB' 'SwapCached: 0 kB' 'Active: 2234672 kB' 'Inactive: 216956 kB' 'Active(anon): 2072848 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2290072 kB' 'Mapped: 64508 kB' 'AnonPages: 164892 kB' 'Shmem: 1911292 kB' 'KernelStack: 11368 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354976 kB' 'Slab: 651420 kB' 'SReclaimable: 354976 kB' 'SUnreclaim: 296444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.325 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.325 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # 
continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@33 -- # echo 0 00:03:03.326 17:28:24 -- setup/common.sh@33 -- # return 0 00:03:03.326 17:28:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.326 17:28:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.326 17:28:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.326 17:28:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:03.326 17:28:24 -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.326 17:28:24 -- setup/common.sh@18 -- # local node=1 00:03:03.326 17:28:24 -- setup/common.sh@19 -- # local var val 00:03:03.326 17:28:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.326 17:28:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.326 17:28:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:03.326 17:28:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:03.326 17:28:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.326 17:28:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.326 17:28:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 76823720 kB' 'MemUsed: 16941788 kB' 'SwapCached: 0 kB' 'Active: 9263148 kB' 'Inactive: 3477116 kB' 'Active(anon): 9007016 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477116 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12349224 kB' 'Mapped: 151672 kB' 'AnonPages: 391080 kB' 'Shmem: 8615976 kB' 'KernelStack: 9144 kB' 'PageTables: 5136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175228 kB' 'Slab: 529568 kB' 'SReclaimable: 175228 kB' 'SUnreclaim: 354340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 
-- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.326 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.326 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- 
# continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # continue 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.327 17:28:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.327 17:28:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.327 17:28:24 -- setup/common.sh@33 -- # echo 0 00:03:03.327 17:28:24 -- setup/common.sh@33 -- # return 0 00:03:03.327 17:28:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.327 17:28:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.327 17:28:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.327 17:28:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.327 17:28:24 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:03.327 node0=512 expecting 512 00:03:03.327 17:28:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.327 17:28:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.327 17:28:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.327 17:28:24 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:03.327 node1=1024 expecting 1024 00:03:03.327 17:28:24 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:03.327 00:03:03.327 real 0m3.022s 00:03:03.327 user 0m1.229s 00:03:03.327 sys 0m1.861s 00:03:03.327 17:28:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:03.327 17:28:24 -- common/autotest_common.sh@10 -- # set +x 00:03:03.327 ************************************ 00:03:03.327 END TEST custom_alloc 00:03:03.327 ************************************ 00:03:03.327 17:28:24 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:03.327 17:28:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:03.327 17:28:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:03.327 17:28:24 -- common/autotest_common.sh@10 -- # set +x 00:03:03.327 ************************************ 00:03:03.327 START TEST no_shrink_alloc 00:03:03.327 ************************************ 00:03:03.327 17:28:24 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:03.327 17:28:24 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:03.327 17:28:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:03.327 17:28:24 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:03.327 17:28:24 -- setup/hugepages.sh@51 -- # shift 00:03:03.327 17:28:24 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:03.327 17:28:24 -- setup/hugepages.sh@52 -- # local node_ids 
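The trace above repeatedly walks setup/common.sh's get_meminfo key-by-key through /proc/meminfo (or the per-node copy under /sys/devices/system/node/nodeN/meminfo) until it reaches the requested field. A condensed sketch of that lookup, reconstructed only from the commands visible in the trace — the function name and the example call are illustrative, not part of the captured output:

    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node statistics live under /sys and carry a "Node N " prefix.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, if present
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp 0   -> 0 for node 0 in the run above

In the custom_alloc run above this lookup resolves HugePages_Total to 1536 system-wide and HugePages_Surp to 0 on each node, which matches the expected 512/1024 split reported by the "node0=512 expecting 512" and "node1=1024 expecting 1024" lines.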
00:03:03.327 17:28:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:03.327 17:28:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:03.327 17:28:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:03.327 17:28:24 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:03.327 17:28:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.327 17:28:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:03.327 17:28:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.327 17:28:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.327 17:28:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.327 17:28:24 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:03.327 17:28:24 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:03.327 17:28:24 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:03.327 17:28:24 -- setup/hugepages.sh@73 -- # return 0 00:03:03.327 17:28:24 -- setup/hugepages.sh@198 -- # setup output 00:03:03.327 17:28:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.327 17:28:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.868 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:05.868 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:05.868 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:05.868 17:28:27 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:05.868 17:28:27 -- setup/hugepages.sh@89 -- # local node 00:03:05.868 17:28:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.868 17:28:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.868 17:28:27 -- setup/hugepages.sh@92 -- # local surp 00:03:05.868 17:28:27 -- setup/hugepages.sh@93 -- # local resv 00:03:05.868 17:28:27 -- setup/hugepages.sh@94 -- # local anon 00:03:05.868 17:28:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.868 17:28:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.868 17:28:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.868 17:28:27 -- setup/common.sh@18 -- # local node= 00:03:05.868 17:28:27 -- setup/common.sh@19 -- # local var val 00:03:05.868 17:28:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.868 17:28:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.868 17:28:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.868 17:28:27 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.868 17:28:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.868 17:28:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170584628 kB' 'MemAvailable: 173818048 kB' 'Buffers: 3896 kB' 'Cached: 14635500 kB' 'SwapCached: 0 kB' 'Active: 11498148 kB' 'Inactive: 3694072 kB' 'Active(anon): 11080192 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556028 kB' 'Mapped: 216196 kB' 'Shmem: 10527368 kB' 'KReclaimable: 530204 kB' 'Slab: 1180948 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650744 kB' 'KernelStack: 20496 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12608816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 
17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.868 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.868 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.869 17:28:27 -- setup/common.sh@33 -- # echo 0 00:03:05.869 17:28:27 -- setup/common.sh@33 -- # return 0 00:03:05.869 17:28:27 -- setup/hugepages.sh@97 -- # anon=0 00:03:05.869 17:28:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.869 17:28:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.869 17:28:27 -- setup/common.sh@18 -- # local node= 00:03:05.869 17:28:27 -- setup/common.sh@19 -- # local var val 00:03:05.869 17:28:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.869 17:28:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.869 17:28:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.869 17:28:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.869 17:28:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.869 17:28:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170587112 kB' 'MemAvailable: 173820028 kB' 'Buffers: 3896 kB' 'Cached: 14635508 kB' 'SwapCached: 0 kB' 'Active: 11498144 kB' 'Inactive: 3694072 kB' 'Active(anon): 11080188 kB' 'Inactive(anon): 0 kB' 
'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556032 kB' 'Mapped: 216180 kB' 'Shmem: 10527376 kB' 'KReclaimable: 530204 kB' 'Slab: 1180996 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650792 kB' 'KernelStack: 20512 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12608832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.869 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.869 17:28:27 -- setup/common.sh@31 -- # read -r var 
val _ [log condensed: the get_meminfo loop tests each remaining /proc/meminfo key, Inactive(anon) through CmaFree, against HugePages_Surp and takes the 'continue' branch for every one of them] 00:03:05.870
17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.870 17:28:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.870 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.870 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.870 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.870 17:28:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.870 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.870 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.870 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.870 17:28:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.870 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.870 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.870 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.870 17:28:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.870 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.870 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.870 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.870 17:28:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.870 17:28:27 -- setup/common.sh@33 -- # echo 0 00:03:05.870 17:28:27 -- setup/common.sh@33 -- # return 0 00:03:05.870 17:28:27 -- setup/hugepages.sh@99 -- # surp=0 00:03:05.870 17:28:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:05.870 17:28:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:05.870 17:28:27 -- setup/common.sh@18 -- # local node= 00:03:05.871 17:28:27 -- setup/common.sh@19 -- # local var val 00:03:05.871 17:28:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.871 17:28:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.871 17:28:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.871 17:28:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.871 17:28:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.871 17:28:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170587112 kB' 'MemAvailable: 173820028 kB' 'Buffers: 3896 kB' 'Cached: 14635524 kB' 'SwapCached: 0 kB' 'Active: 11498000 kB' 'Inactive: 3694072 kB' 'Active(anon): 11080044 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555856 kB' 'Mapped: 216180 kB' 'Shmem: 10527392 kB' 'KReclaimable: 530204 kB' 'Slab: 1180996 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650792 kB' 'KernelStack: 20496 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12608984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.871 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.871 17:28:27 -- setup/common.sh@32 -- # continue [log condensed: the same loop then tests Mlocked through HugePages_Free against HugePages_Rsvd and skips each of them via 'continue'] 00:03:05.872 17:28:27 -- setup/common.sh@32 --
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.872 17:28:27 -- setup/common.sh@33 -- # echo 0 00:03:05.872 17:28:27 -- setup/common.sh@33 -- # return 0 00:03:05.872 17:28:27 -- setup/hugepages.sh@100 -- # resv=0 00:03:05.872 17:28:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:05.872 nr_hugepages=1024 00:03:05.872 17:28:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:05.872 resv_hugepages=0 00:03:05.872 17:28:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:05.872 surplus_hugepages=0 00:03:05.872 17:28:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:05.872 anon_hugepages=0 00:03:05.872 17:28:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.872 17:28:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:05.872 17:28:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:05.872 17:28:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:05.872 17:28:27 -- setup/common.sh@18 -- # local node= 00:03:05.872 17:28:27 -- setup/common.sh@19 -- # local var val 00:03:05.872 17:28:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.872 17:28:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.872 17:28:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.872 17:28:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.872 17:28:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.872 17:28:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.872 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.872 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.872 17:28:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170587840 kB' 'MemAvailable: 173820756 kB' 'Buffers: 3896 kB' 'Cached: 14635540 kB' 'SwapCached: 0 kB' 'Active: 11498364 kB' 'Inactive: 3694072 kB' 'Active(anon): 11080408 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556276 kB' 'Mapped: 216180 kB' 'Shmem: 10527408 kB' 'KReclaimable: 530204 kB' 'Slab: 1180996 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 650792 kB' 'KernelStack: 20528 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12609368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:05.872 17:28:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.872 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.872 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.872 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.872 17:28:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.872 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.872 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.872 17:28:27 -- setup/common.sh@31 
-- # read -r var val _ [log condensed: MemAvailable through HardwareCorrupted are tested against HugePages_Total and skipped via 'continue'; the scan resumes below]
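For anyone following this trace: the loop condensed above, and its conclusion just below where HugePages_Total finally matches and 1024 is echoed, is setup/common.sh's get_meminfo helper scanning a meminfo file key by key. The following is a minimal sketch of what that helper appears to do, reconstructed from the xtrace output rather than taken from the actual script source; the function body, the node handling and the file paths are assumptions based only on what the trace shows.

    shopt -s extglob   # the +([0-9]) patterns seen in the trace need extglob

    # Reconstruction of the traced helper; names mirror the log, internals are assumed.
    get_meminfo() {
        local get=$1        # key to look up, e.g. HugePages_Total or AnonPages
        local node=${2:-}   # optional NUMA node number (empty means system-wide)
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        # When a node is given, read that node's own meminfo file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <N> " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # every non-matching key is skipped
            echo "$val"                        # e.g. 1024 for HugePages_Total here
            return 0
        done
        return 1
    }

In this run the helper reports HugePages_Surp=0 and HugePages_Rsvd=0 earlier in the trace and HugePages_Total=1024 just below, which is what lets the (( 1024 == nr_hugepages + surp + resv )) check in setup/hugepages.sh pass.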
00:03:05.873 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.873 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.873 17:28:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.873 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.873 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.873 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.873 17:28:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.873 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.873 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.873 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.873 17:28:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.873 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.873 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.873 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.873 17:28:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.873 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.873 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.873 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.873 17:28:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.874 17:28:27 -- setup/common.sh@33 -- # echo 1024 00:03:05.874 17:28:27 -- setup/common.sh@33 -- # return 0 00:03:05.874 17:28:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.874 17:28:27 -- setup/hugepages.sh@112 -- # get_nodes 00:03:05.874 17:28:27 -- setup/hugepages.sh@27 -- # local node 00:03:05.874 17:28:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.874 17:28:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:05.874 17:28:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.874 17:28:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:05.874 17:28:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.874 17:28:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.874 17:28:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.874 17:28:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.874 17:28:27 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:05.874 17:28:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.874 17:28:27 -- setup/common.sh@18 -- # local node=0 00:03:05.874 17:28:27 -- setup/common.sh@19 -- # local var val 00:03:05.874 17:28:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.874 17:28:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.874 17:28:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:05.874 17:28:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:05.874 17:28:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.874 17:28:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91656384 kB' 'MemUsed: 5959244 kB' 'SwapCached: 0 kB' 'Active: 2234444 kB' 'Inactive: 216956 kB' 'Active(anon): 2072620 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2290212 kB' 'Mapped: 64520 kB' 'AnonPages: 164300 kB' 'Shmem: 1911432 kB' 'KernelStack: 11352 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354976 kB' 'Slab: 651320 kB' 'SReclaimable: 354976 kB' 'SUnreclaim: 296344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.874 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.874 17:28:27 -- setup/common.sh@32 -- # continue 
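The node-0 query above and the per-node bookkeeping around it (get_nodes, nodes_sys, nodes_test and the 'node0=1024 expecting 1024' line a little further below) come from setup/hugepages.sh. A rough sketch of that logic follows, again reconstructed from the trace and reusing the get_meminfo sketch above; how nodes_test is first seeded is not visible in this part of the log, so that seeding, and the way each per-node count is read, are assumptions.

    shopt -s extglob
    declare -a nodes_sys nodes_test

    # Assumption for illustration: this run expects the whole 1024-page pool on node 0.
    nodes_test[0]=1024

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Record the kernel's per-node hugepage count (1024 on node0 and 0 on
            # node1 in this log); reading it through get_meminfo is an assumption.
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}   # 2 in this log
    }

    check_nodes() {
        local node resv
        resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
        for node in "${!nodes_test[@]}"; do
            # Fold reserved pages and this node's surplus pages into the expectation.
            (( nodes_test[node] += resv ))
            (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
            [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]
        done
    }

    # Usage (requires the get_meminfo sketch above):
    get_nodes
    check_nodes

With resv=0 and a node0 surplus of 0 the expectation stays at 1024 and matches the 1024 pages the kernel reports for node0, which is presumably why the NRHUGE=512 'setup output' run below only reports 'INFO: Requested 512 hugepages but 1024 already allocated on node0' instead of changing the allocation.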
00:03:05.874 [log condensed: the node0 meminfo keys Inactive(anon) through FileHugePages are tested against HugePages_Surp and skipped via 'continue'] 00:03:06.134
17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.134 17:28:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.134 17:28:27 -- setup/common.sh@32 -- # continue 00:03:06.134 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.134 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.134 17:28:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.134 17:28:27 -- setup/common.sh@32 -- # continue 00:03:06.134 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.134 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.134 17:28:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.134 17:28:27 -- setup/common.sh@32 -- # continue 00:03:06.134 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.134 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.134 17:28:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.134 17:28:27 -- setup/common.sh@32 -- # continue 00:03:06.134 17:28:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.134 17:28:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.134 17:28:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.134 17:28:27 -- setup/common.sh@33 -- # echo 0 00:03:06.134 17:28:27 -- setup/common.sh@33 -- # return 0 00:03:06.134 17:28:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.134 17:28:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.134 17:28:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.134 17:28:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.134 17:28:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:06.134 node0=1024 expecting 1024 00:03:06.134 17:28:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:06.134 17:28:27 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:06.134 17:28:27 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:06.134 17:28:27 -- setup/hugepages.sh@202 -- # setup output 00:03:06.134 17:28:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.134 17:28:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:08.676 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:08.676 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:08.676 0000:80:04.0 (8086 2021): Already using the vfio-pci 
driver 00:03:08.676 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:08.676 17:28:30 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:08.676 17:28:30 -- setup/hugepages.sh@89 -- # local node 00:03:08.676 17:28:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.676 17:28:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.676 17:28:30 -- setup/hugepages.sh@92 -- # local surp 00:03:08.676 17:28:30 -- setup/hugepages.sh@93 -- # local resv 00:03:08.676 17:28:30 -- setup/hugepages.sh@94 -- # local anon 00:03:08.676 17:28:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.676 17:28:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.676 17:28:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:08.676 17:28:30 -- setup/common.sh@18 -- # local node= 00:03:08.676 17:28:30 -- setup/common.sh@19 -- # local var val 00:03:08.676 17:28:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.676 17:28:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.676 17:28:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.676 17:28:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.676 17:28:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.676 17:28:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.676 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.676 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.676 17:28:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170590192 kB' 'MemAvailable: 173823108 kB' 'Buffers: 3896 kB' 'Cached: 14635612 kB' 'SwapCached: 0 kB' 'Active: 11500936 kB' 'Inactive: 3694072 kB' 'Active(anon): 11082980 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558276 kB' 'Mapped: 216296 kB' 'Shmem: 10527480 kB' 'KReclaimable: 530204 kB' 'Slab: 1181472 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 651268 kB' 'KernelStack: 20464 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12609688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:08.676 17:28:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.676 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.676 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.676 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.676 17:28:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.676 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.676 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.676 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.676 17:28:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.676 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.676 
17:28:30 -- [log condensed: Buffers through KernelStack are tested against AnonHugePages and skipped via 'continue'] 00:03:08.677 17:28:30 -- setup/common.sh@32 --
# [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.677 17:28:30 -- setup/common.sh@33 -- # echo 0 00:03:08.677 17:28:30 -- setup/common.sh@33 -- # return 0 00:03:08.677 17:28:30 -- 
setup/hugepages.sh@97 -- # anon=0 00:03:08.677 17:28:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:08.677 17:28:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.677 17:28:30 -- setup/common.sh@18 -- # local node= 00:03:08.677 17:28:30 -- setup/common.sh@19 -- # local var val 00:03:08.677 17:28:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.677 17:28:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.677 17:28:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.677 17:28:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.677 17:28:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.677 17:28:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170591988 kB' 'MemAvailable: 173824904 kB' 'Buffers: 3896 kB' 'Cached: 14635616 kB' 'SwapCached: 0 kB' 'Active: 11500056 kB' 'Inactive: 3694072 kB' 'Active(anon): 11082100 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557448 kB' 'Mapped: 216256 kB' 'Shmem: 10527484 kB' 'KReclaimable: 530204 kB' 'Slab: 1181464 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 651260 kB' 'KernelStack: 20528 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12609700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 
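The AnonHugePages lookup traced above is the same generic field scan that setup/common.sh's get_meminfo runs for every key: read the meminfo file into an array, strip any "Node <n> " prefix, then walk the lines with IFS=': ' until the requested field matches. A minimal sketch of that pattern, using a hypothetical helper name get_meminfo_sketch (an illustration of the traced commands, not the actual SPDK function):

shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem var val
    # With a node index, read the per-node view instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <n> "; strip it so the
    # same field names match in both views (extglob pattern, as in the trace).
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"      # numeric value only; a trailing "kB" lands in "_"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}

Callers capture the result by command substitution, e.g. anon=$(get_meminfo_sketch AnonHugePages), which is the anon=0 recorded at hugepages.sh@97 just above.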
00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.677 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.677 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.678 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.678 17:28:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.678 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.678 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.678 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.678 17:28:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.678 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.678 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.678 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.678 17:28:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.678 17:28:30 
-- setup/common.sh@32 -- # continue 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.939 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.939 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.940 17:28:30 -- setup/common.sh@33 -- # echo 0 00:03:08.940 17:28:30 -- setup/common.sh@33 -- # return 0 00:03:08.940 17:28:30 -- setup/hugepages.sh@99 -- # surp=0 00:03:08.940 17:28:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:08.940 17:28:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:08.940 17:28:30 -- setup/common.sh@18 -- # local node= 00:03:08.940 17:28:30 -- setup/common.sh@19 -- # local var val 00:03:08.940 17:28:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.940 17:28:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.940 17:28:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.940 17:28:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.940 17:28:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.940 17:28:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
191381136 kB' 'MemFree: 170592980 kB' 'MemAvailable: 173825896 kB' 'Buffers: 3896 kB' 'Cached: 14635628 kB' 'SwapCached: 0 kB' 'Active: 11499584 kB' 'Inactive: 3694072 kB' 'Active(anon): 11081628 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557436 kB' 'Mapped: 216180 kB' 'Shmem: 10527496 kB' 'KReclaimable: 530204 kB' 'Slab: 1181444 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 651240 kB' 'KernelStack: 20528 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12609716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.940 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.940 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # 
continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.941 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.941 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.942 17:28:30 -- setup/common.sh@33 -- # echo 0 00:03:08.942 17:28:30 -- setup/common.sh@33 -- # return 0 00:03:08.942 17:28:30 -- setup/hugepages.sh@100 -- # resv=0 00:03:08.942 17:28:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:08.942 nr_hugepages=1024 00:03:08.942 17:28:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.942 resv_hugepages=0 00:03:08.942 17:28:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.942 surplus_hugepages=0 00:03:08.942 17:28:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.942 anon_hugepages=0 00:03:08.942 17:28:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.942 17:28:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:08.942 17:28:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:08.942 17:28:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:08.942 17:28:30 -- setup/common.sh@18 -- # local node= 00:03:08.942 17:28:30 -- setup/common.sh@19 -- # local var val 00:03:08.942 17:28:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.942 17:28:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.942 17:28:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.942 17:28:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.942 17:28:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.942 17:28:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.942 17:28:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170592664 kB' 'MemAvailable: 173825580 kB' 'Buffers: 3896 kB' 'Cached: 14635640 kB' 'SwapCached: 0 kB' 'Active: 11499596 kB' 'Inactive: 3694072 kB' 'Active(anon): 11081640 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557436 kB' 'Mapped: 216180 kB' 'Shmem: 10527508 kB' 'KReclaimable: 530204 kB' 'Slab: 1181444 kB' 'SReclaimable: 530204 kB' 'SUnreclaim: 651240 kB' 'KernelStack: 20528 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12609732 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317112 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3838932 kB' 'DirectMap2M: 33589248 kB' 'DirectMap1G: 164626432 kB' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
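Once anon, surp and resv have been extracted this way, the verification that starts at hugepages.sh@107 above, and completes once the HugePages_Total scan below returns 1024, is plain arithmetic: the HugePages_Total reported by the kernel must account for the requested nr_hugepages plus any surplus and reserved pages. A rough sketch of that check, reusing the hypothetical helper sketched earlier (variable names follow the trace; the wrapper function itself is an assumption, not the real verify_nr_hugepages):

verify_nr_hugepages_sketch() {
    local nr_hugepages=1024                        # requested pool size
    local anon surp resv total
    anon=$(get_meminfo_sketch AnonHugePages)       # transparent hugepages in use
    surp=$(get_meminfo_sketch HugePages_Surp)      # surplus pages
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # reserved but not yet faulted
    total=$(get_meminfo_sketch HugePages_Total)    # what the kernel actually holds
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # Consistent when the kernel total covers the request plus surplus and
    # reserved pages; in this run everything but the total is 0, so 1024 == 1024.
    (( total == nr_hugepages + surp + resv ))
}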
00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.942 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 
00:03:08.942 17:28:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.942 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # 
continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 
00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.943 17:28:30 -- setup/common.sh@33 -- # echo 1024 00:03:08.943 17:28:30 -- setup/common.sh@33 -- # return 0 00:03:08.943 17:28:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.943 17:28:30 -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.943 17:28:30 -- setup/hugepages.sh@27 -- # local node 00:03:08.943 17:28:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.943 17:28:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:08.943 17:28:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.943 17:28:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:08.943 17:28:30 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.943 17:28:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.943 17:28:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.943 17:28:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.943 17:28:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.943 17:28:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.943 17:28:30 -- setup/common.sh@18 -- # local node=0 00:03:08.943 17:28:30 -- setup/common.sh@19 -- # local var val 00:03:08.943 17:28:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.943 17:28:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.943 17:28:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.943 17:28:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.943 17:28:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.943 17:28:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91658296 kB' 'MemUsed: 5957332 kB' 'SwapCached: 0 kB' 'Active: 2235512 kB' 'Inactive: 216956 kB' 'Active(anon): 2073688 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2290288 kB' 'Mapped: 64524 kB' 'AnonPages: 165340 kB' 'Shmem: 1911508 kB' 'KernelStack: 11384 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354976 kB' 'Slab: 651720 kB' 'SReclaimable: 354976 kB' 'SUnreclaim: 296744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.943 17:28:30 -- 
setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.943 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.943 17:28:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 
00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # continue 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.944 17:28:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.944 17:28:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.944 17:28:30 -- setup/common.sh@33 -- # echo 0 00:03:08.944 17:28:30 -- setup/common.sh@33 -- # return 0 00:03:08.944 17:28:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.944 17:28:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.944 17:28:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.944 17:28:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.945 17:28:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:08.945 node0=1024 expecting 1024 00:03:08.945 17:28:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:08.945 00:03:08.945 real 0m5.551s 00:03:08.945 user 0m2.172s 00:03:08.945 sys 0m3.406s 00:03:08.945 17:28:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.945 17:28:30 -- common/autotest_common.sh@10 -- # set +x 00:03:08.945 ************************************ 00:03:08.945 END TEST no_shrink_alloc 00:03:08.945 ************************************ 00:03:08.945 17:28:30 -- setup/hugepages.sh@217 -- # clear_hp 00:03:08.945 17:28:30 -- 
setup/hugepages.sh@37 -- # local node hp 00:03:08.945 17:28:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:08.945 17:28:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.945 17:28:30 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.945 17:28:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.945 17:28:30 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.945 17:28:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:08.945 17:28:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.945 17:28:30 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.945 17:28:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.945 17:28:30 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.945 17:28:30 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:08.945 17:28:30 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:08.945 00:03:08.945 real 0m21.704s 00:03:08.945 user 0m8.368s 00:03:08.945 sys 0m12.865s 00:03:08.945 17:28:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.945 17:28:30 -- common/autotest_common.sh@10 -- # set +x 00:03:08.945 ************************************ 00:03:08.945 END TEST hugepages 00:03:08.945 ************************************ 00:03:08.945 17:28:30 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:08.945 17:28:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.945 17:28:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.945 17:28:30 -- common/autotest_common.sh@10 -- # set +x 00:03:08.945 ************************************ 00:03:08.945 START TEST driver 00:03:08.945 ************************************ 00:03:08.945 17:28:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:09.204 * Looking for test storage... 
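The clear_hp teardown traced at the end of the hugepages test above walks every NUMA node and zeroes each hugepage pool before the driver suite below takes over. A minimal sketch; the xtrace shows only the bare echo 0, so writing into each pool's nr_hugepages file (and the need for root) is an assumption based on the standard sysfs layout:

  clear_hp() {
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"    # assumed target; releases that page size on that node (needs root)
          done
      done
      export CLEAR_HUGE=yes                  # the trace exports this flag right after the loops
  }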
00:03:09.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.204 17:28:30 -- setup/driver.sh@68 -- # setup reset 00:03:09.204 17:28:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.204 17:28:30 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.399 17:28:34 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:13.399 17:28:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:13.399 17:28:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:13.399 17:28:34 -- common/autotest_common.sh@10 -- # set +x 00:03:13.399 ************************************ 00:03:13.399 START TEST guess_driver 00:03:13.399 ************************************ 00:03:13.399 17:28:34 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:13.399 17:28:34 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:13.399 17:28:34 -- setup/driver.sh@47 -- # local fail=0 00:03:13.399 17:28:34 -- setup/driver.sh@49 -- # pick_driver 00:03:13.399 17:28:34 -- setup/driver.sh@36 -- # vfio 00:03:13.399 17:28:34 -- setup/driver.sh@21 -- # local iommu_grups 00:03:13.399 17:28:34 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:13.399 17:28:34 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:13.399 17:28:34 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:13.399 17:28:34 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:13.399 17:28:34 -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:13.399 17:28:34 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:13.399 17:28:34 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:13.399 17:28:34 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:13.399 17:28:34 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:13.399 17:28:34 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:13.399 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:13.399 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:13.399 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:13.399 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:13.400 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:13.400 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:13.400 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:13.400 17:28:34 -- setup/driver.sh@30 -- # return 0 00:03:13.400 17:28:34 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:13.400 17:28:34 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:13.400 17:28:34 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:13.400 17:28:34 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:13.400 Looking for driver=vfio-pci 00:03:13.400 17:28:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.400 17:28:34 -- setup/driver.sh@45 -- # setup output config 00:03:13.400 17:28:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.400 17:28:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.939 17:28:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.939 17:28:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:15.939 17:28:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:16.508 17:28:38 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:16.508 17:28:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:16.508 17:28:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:16.767 17:28:38 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:16.767 17:28:38 -- setup/driver.sh@65 -- # setup reset 00:03:16.767 17:28:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.767 17:28:38 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.007 00:03:21.007 real 0m7.580s 00:03:21.007 user 0m2.075s 00:03:21.007 sys 0m3.959s 00:03:21.007 17:28:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.007 17:28:42 -- common/autotest_common.sh@10 -- # set +x 00:03:21.007 ************************************ 00:03:21.007 END TEST guess_driver 00:03:21.007 ************************************ 00:03:21.007 00:03:21.007 real 0m11.604s 00:03:21.007 user 0m3.277s 00:03:21.007 sys 0m6.086s 00:03:21.007 17:28:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.007 17:28:42 -- common/autotest_common.sh@10 -- # set +x 00:03:21.007 ************************************ 00:03:21.007 END TEST driver 00:03:21.007 ************************************ 00:03:21.007 17:28:42 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:21.007 17:28:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:21.007 17:28:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:21.007 17:28:42 -- common/autotest_common.sh@10 -- # set +x 00:03:21.007 ************************************ 00:03:21.007 START TEST devices 00:03:21.007 ************************************ 00:03:21.007 17:28:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:21.007 * Looking for test storage... 
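The guess_driver run that just finished reduces to a short decision: if IOMMU groups are populated (174 on this host) or unsafe no-IOMMU mode is enabled, and modprobe can resolve vfio_pci to real .ko files, pick vfio-pci. A hedged sketch of that logic; the unsafe-mode comparison and the uio_pci_generic fallback are assumptions, since this run only exercises the vfio-pci path:

  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)    # nullglob handling omitted for brevity
      local unsafe=N
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          # a driver only counts if modprobe resolves it to actual kernel modules
          if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
              echo vfio-pci
              return 0
          fi
      fi
      echo uio_pci_generic    # assumed fallback, never reached in this log
  }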
00:03:21.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:21.007 17:28:42 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:21.007 17:28:42 -- setup/devices.sh@192 -- # setup reset 00:03:21.007 17:28:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.007 17:28:42 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.545 17:28:44 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:23.545 17:28:44 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:23.545 17:28:44 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:23.545 17:28:44 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:23.545 17:28:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:23.545 17:28:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:23.545 17:28:44 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:23.545 17:28:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:23.545 17:28:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:23.545 17:28:44 -- setup/devices.sh@196 -- # blocks=() 00:03:23.545 17:28:44 -- setup/devices.sh@196 -- # declare -a blocks 00:03:23.545 17:28:44 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:23.545 17:28:44 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:23.545 17:28:44 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:23.545 17:28:44 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:23.545 17:28:44 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:23.545 17:28:44 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:23.545 17:28:44 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:23.545 17:28:44 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:23.545 17:28:44 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:23.545 17:28:44 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:23.545 17:28:44 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:23.545 No valid GPT data, bailing 00:03:23.545 17:28:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:23.545 17:28:44 -- scripts/common.sh@393 -- # pt= 00:03:23.545 17:28:44 -- scripts/common.sh@394 -- # return 1 00:03:23.545 17:28:44 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:23.545 17:28:44 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:23.545 17:28:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:23.545 17:28:44 -- setup/common.sh@80 -- # echo 1000204886016 00:03:23.545 17:28:44 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:23.545 17:28:44 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:23.545 17:28:44 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:23.545 17:28:44 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:23.545 17:28:44 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:23.545 17:28:44 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:23.545 17:28:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:23.545 17:28:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:23.545 17:28:44 -- common/autotest_common.sh@10 -- # set +x 00:03:23.545 ************************************ 00:03:23.545 START TEST nvme_mount 00:03:23.545 ************************************ 00:03:23.545 17:28:44 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:03:23.545 17:28:44 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:23.545 17:28:44 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:23.545 17:28:44 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.545 17:28:44 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:23.545 17:28:44 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:23.545 17:28:44 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:23.545 17:28:44 -- setup/common.sh@40 -- # local part_no=1 00:03:23.545 17:28:44 -- setup/common.sh@41 -- # local size=1073741824 00:03:23.545 17:28:44 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:23.545 17:28:44 -- setup/common.sh@44 -- # parts=() 00:03:23.545 17:28:44 -- setup/common.sh@44 -- # local parts 00:03:23.545 17:28:44 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:23.545 17:28:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:23.545 17:28:44 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:23.545 17:28:44 -- setup/common.sh@46 -- # (( part++ )) 00:03:23.545 17:28:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:23.545 17:28:44 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:23.545 17:28:44 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:23.545 17:28:44 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:24.484 Creating new GPT entries in memory. 00:03:24.484 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:24.484 other utilities. 00:03:24.484 17:28:45 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:24.484 17:28:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.484 17:28:45 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:24.484 17:28:45 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:24.484 17:28:45 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:25.421 Creating new GPT entries in memory. 00:03:25.421 The operation has completed successfully. 
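The partitioning that just completed is partition_drive from setup/common.sh: convert the 1 GiB partition size into 512-byte sectors, wipe the GPT, then create each partition under an exclusive lock while scripts/sync_dev_uevents.sh waits for the new partition's uevent. The same steps in isolation, with the numbers taken straight from the trace:

  disk=/dev/nvme0n1
  size=$(( 1073741824 / 512 ))              # 1 GiB expressed in 512-byte sectors
  sgdisk "$disk" --zap-all                  # destroy any existing GPT/MBR data
  part_start=2048
  part_end=$(( part_start + size - 1 ))     # 2099199 for the first partition
  # sync_dev_uevents.sh block/partition nvme0n1p1 runs alongside and returns once
  # udev announces the new node; the flock keeps concurrent sgdisk calls serialized.
  flock "$disk" sgdisk "$disk" --new=1:$part_start:$part_end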
00:03:25.421 17:28:46 -- setup/common.sh@57 -- # (( part++ )) 00:03:25.421 17:28:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:25.421 17:28:46 -- setup/common.sh@62 -- # wait 409003 00:03:25.421 17:28:46 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.421 17:28:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:25.421 17:28:46 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.421 17:28:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:25.421 17:28:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:25.421 17:28:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.421 17:28:46 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:25.421 17:28:46 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:25.421 17:28:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:25.421 17:28:46 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.421 17:28:46 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:25.421 17:28:46 -- setup/devices.sh@53 -- # local found=0 00:03:25.421 17:28:46 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:25.421 17:28:46 -- setup/devices.sh@56 -- # : 00:03:25.421 17:28:46 -- setup/devices.sh@59 -- # local pci status 00:03:25.421 17:28:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.421 17:28:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:25.421 17:28:46 -- setup/devices.sh@47 -- # setup output config 00:03:25.421 17:28:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.421 17:28:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:27.960 17:28:49 -- setup/devices.sh@63 -- # found=1 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 
17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.960 17:28:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:27.960 17:28:49 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:27.960 17:28:49 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.960 17:28:49 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:27.960 17:28:49 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:27.960 17:28:49 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:27.960 17:28:49 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.960 17:28:49 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.960 17:28:49 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:27.960 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:27.960 17:28:49 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:27.960 17:28:49 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:27.960 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:27.960 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:27.960 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:27.960 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:27.960 17:28:49 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:27.960 17:28:49 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:27.960 17:28:49 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.960 17:28:49 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:27.960 17:28:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:27.960 17:28:49 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.220 17:28:49 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.220 17:28:49 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:28.220 17:28:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:28.220 17:28:49 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.220 17:28:49 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.220 17:28:49 -- setup/devices.sh@53 -- # local found=0 00:03:28.220 17:28:49 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:28.220 17:28:49 -- setup/devices.sh@56 -- # : 00:03:28.220 17:28:49 -- setup/devices.sh@59 -- # local pci status 00:03:28.220 17:28:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.220 17:28:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:28.220 17:28:49 -- setup/devices.sh@47 -- # setup output config 00:03:28.220 17:28:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.220 17:28:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:30.759 17:28:51 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:51 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:30.759 17:28:51 -- setup/devices.sh@63 -- # found=1 00:03:30.759 17:28:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:30.759 17:28:52 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:30.759 17:28:52 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.759 17:28:52 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:30.759 17:28:52 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:30.759 17:28:52 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.759 17:28:52 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:30.759 17:28:52 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:30.759 17:28:52 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:30.759 17:28:52 -- setup/devices.sh@50 -- # local mount_point= 00:03:30.759 17:28:52 -- setup/devices.sh@51 -- # local test_file= 00:03:30.759 17:28:52 -- setup/devices.sh@53 -- # local found=0 00:03:30.759 17:28:52 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:30.759 17:28:52 -- setup/devices.sh@59 -- # local pci status 00:03:30.759 17:28:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.759 17:28:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:30.759 17:28:52 -- setup/devices.sh@47 -- # setup output config 00:03:30.759 17:28:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.759 17:28:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:33.297 17:28:54 -- 
setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:33.297 17:28:54 -- setup/devices.sh@63 -- # found=1 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.297 17:28:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.297 17:28:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:33.297 17:28:54 -- setup/devices.sh@68 -- # return 0 00:03:33.297 17:28:54 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:33.297 17:28:54 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.297 17:28:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:33.297 17:28:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.297 17:28:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:33.297 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:33.297 00:03:33.297 real 0m9.834s 00:03:33.297 user 0m2.649s 00:03:33.297 sys 0m4.736s 00:03:33.297 17:28:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.297 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:03:33.297 ************************************ 00:03:33.297 END TEST nvme_mount 00:03:33.297 ************************************ 00:03:33.297 17:28:54 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:33.297 17:28:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.297 17:28:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.297 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:03:33.297 ************************************ 00:03:33.297 START TEST dm_mount 00:03:33.297 ************************************ 00:03:33.297 17:28:54 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:33.297 17:28:54 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:33.297 17:28:54 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:33.297 17:28:54 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:33.297 17:28:54 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:33.297 17:28:54 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:33.297 17:28:54 -- setup/common.sh@40 -- # local part_no=2 00:03:33.297 17:28:54 -- setup/common.sh@41 -- # local size=1073741824 00:03:33.297 17:28:54 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:33.297 17:28:54 -- setup/common.sh@44 -- # parts=() 00:03:33.297 17:28:54 -- setup/common.sh@44 -- # local parts 00:03:33.297 17:28:54 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:33.297 17:28:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.298 17:28:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:33.298 17:28:54 -- setup/common.sh@46 -- # (( part++ )) 00:03:33.298 17:28:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.298 17:28:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:33.298 17:28:54 -- setup/common.sh@46 -- # (( part++ )) 00:03:33.298 17:28:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.298 17:28:54 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:33.298 17:28:54 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:33.298 17:28:54 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:34.236 Creating new GPT entries in memory. 00:03:34.236 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:34.236 other utilities. 00:03:34.236 17:28:55 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:34.236 17:28:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.236 17:28:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:34.236 17:28:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:34.236 17:28:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:35.174 Creating new GPT entries in memory. 00:03:35.174 The operation has completed successfully. 
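The nvme_mount test that ended above follows a fixed cycle: format the partition, mount it under test/setup/nvme_mount, drop a test file, confirm through PCI_ALLOWED-scoped setup.sh output that the controller is reported as held by the mount rather than rebound, then unmount and wipe. A compressed sketch of that cycle (rootdir is assumed to point at the spdk checkout; the verify step is summarized as a comment):

  nvme_mount=$rootdir/test/setup/nvme_mount
  mkfs.ext4 -qF /dev/nvme0n1p1
  mkdir -p "$nvme_mount"
  mount /dev/nvme0n1p1 "$nvme_mount"
  : > "$nvme_mount/test_nvme"               # the dummy file the verify step checks for
  # verify: PCI_ALLOWED=0000:5e:00.0 setup.sh config must report
  # "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
  umount "$nvme_mount"
  wipefs --all /dev/nvme0n1p1
  wipefs --all /dev/nvme0n1                 # the log shows the GPT signatures being erased here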
00:03:35.174 17:28:56 -- setup/common.sh@57 -- # (( part++ )) 00:03:35.174 17:28:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.174 17:28:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:35.174 17:28:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.174 17:28:56 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:36.554 The operation has completed successfully. 00:03:36.554 17:28:57 -- setup/common.sh@57 -- # (( part++ )) 00:03:36.554 17:28:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.554 17:28:57 -- setup/common.sh@62 -- # wait 413036 00:03:36.554 17:28:57 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:36.554 17:28:57 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.554 17:28:57 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:36.554 17:28:57 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:36.554 17:28:57 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:36.554 17:28:57 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.554 17:28:57 -- setup/devices.sh@161 -- # break 00:03:36.554 17:28:57 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.554 17:28:57 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:36.554 17:28:57 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:36.554 17:28:57 -- setup/devices.sh@166 -- # dm=dm-2 00:03:36.554 17:28:57 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:36.554 17:28:57 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:36.554 17:28:57 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.554 17:28:57 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:36.554 17:28:57 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.554 17:28:57 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.554 17:28:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:36.554 17:28:57 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.554 17:28:57 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:36.554 17:28:57 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:36.554 17:28:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:36.554 17:28:57 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.554 17:28:57 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:36.554 17:28:57 -- setup/devices.sh@53 -- # local found=0 00:03:36.554 17:28:57 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:36.554 17:28:57 -- setup/devices.sh@56 -- # : 00:03:36.554 17:28:57 -- 
setup/devices.sh@59 -- # local pci status 00:03:36.554 17:28:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.554 17:28:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:36.554 17:28:57 -- setup/devices.sh@47 -- # setup output config 00:03:36.554 17:28:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.554 17:28:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.464 17:28:59 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:28:59 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:38.464 17:28:59 -- setup/devices.sh@63 -- # found=1 00:03:38.464 17:28:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:28:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:28:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:28:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:28:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:28:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:28:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:28:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:28:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:28:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:28:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:28:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:28:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:29:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:29:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:29:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:29:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:29:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:29:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:29:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:29:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:29:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.464 17:29:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.464 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.725 17:29:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.725 17:29:00 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:38.725 17:29:00 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:38.725 17:29:00 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:38.725 17:29:00 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:38.725 17:29:00 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:38.725 17:29:00 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:38.725 17:29:00 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:38.725 17:29:00 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:38.725 17:29:00 -- setup/devices.sh@50 -- # local mount_point= 00:03:38.725 17:29:00 -- setup/devices.sh@51 -- # local test_file= 00:03:38.725 17:29:00 -- setup/devices.sh@53 -- # local found=0 00:03:38.725 17:29:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:38.725 17:29:00 -- setup/devices.sh@59 -- # local pci status 00:03:38.725 17:29:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.725 17:29:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:38.725 17:29:00 -- setup/devices.sh@47 -- # setup output config 00:03:38.725 17:29:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.725 17:29:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:41.309 17:29:02 -- setup/devices.sh@63 -- # found=1 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 
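The dm_mount flow traced above joins both test partitions into a single device-mapper target, nvme_dm_test, which resolves to dm-2 and shows up as a holder of nvme0n1p1 and nvme0n1p2 while the verify scan continues below. The log never prints the table that dmsetup loads, so the linear concatenation here is illustrative only, not the test's actual table:

  p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
  s1=$(blockdev --getsz "$p1")               # partition sizes in 512-byte sectors
  s2=$(blockdev --getsz "$p2")
  {
      printf '0 %s linear %s 0\n' "$s1" "$p1"
      printf '%s %s linear %s 0\n' "$s1" "$s2" "$p2"
  } | dmsetup create nvme_dm_test            # hypothetical table: p1 followed by p2
  dm=$(readlink -f /dev/mapper/nvme_dm_test) # resolves to /dev/dm-N, dm-2 in this run
  ls "/sys/class/block/${p1##*/}/holders"    # both partitions now list that dm-N as a holder

Teardown mirrors it: dmsetup remove --force nvme_dm_test followed by wipefs on each partition, which is what the cleanup_dm trace below does.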
00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.309 17:29:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.309 17:29:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.569 17:29:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:41.569 17:29:02 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:41.569 17:29:02 -- setup/devices.sh@68 -- # return 0 00:03:41.569 17:29:02 -- setup/devices.sh@187 -- # cleanup_dm 00:03:41.569 17:29:02 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.569 17:29:02 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:41.569 17:29:02 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:41.569 17:29:02 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.569 17:29:02 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:41.569 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:41.569 17:29:02 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:41.569 17:29:02 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:41.569 00:03:41.569 real 0m8.308s 00:03:41.569 user 0m1.880s 00:03:41.569 sys 0m3.420s 00:03:41.569 17:29:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.569 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:03:41.569 ************************************ 00:03:41.569 END TEST dm_mount 00:03:41.569 ************************************ 00:03:41.569 17:29:03 -- setup/devices.sh@1 -- # cleanup 00:03:41.569 17:29:03 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:41.569 17:29:03 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.569 17:29:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.569 17:29:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:41.569 17:29:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:41.569 17:29:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:41.829 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:41.829 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:41.829 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:41.829 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:41.829 17:29:03 -- setup/devices.sh@12 -- # cleanup_dm 00:03:41.829 17:29:03 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.829 17:29:03 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:41.829 17:29:03 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.829 17:29:03 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:41.829 17:29:03 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:41.829 17:29:03 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:41.829 00:03:41.829 real 0m21.210s 00:03:41.829 user 0m5.505s 00:03:41.829 sys 0m10.037s 00:03:41.829 17:29:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.829 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:03:41.829 ************************************ 00:03:41.829 END TEST devices 00:03:41.829 ************************************ 00:03:41.829 00:03:41.829 real 1m13.808s 00:03:41.829 user 0m23.618s 00:03:41.829 sys 0m40.454s 00:03:41.829 17:29:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.829 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:03:41.829 ************************************ 00:03:41.829 END TEST setup.sh 00:03:41.829 ************************************ 00:03:41.829 17:29:03 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:45.123 Hugepages 00:03:45.123 node hugesize free / total 00:03:45.123 node0 1048576kB 0 / 0 00:03:45.123 node0 2048kB 2048 / 2048 00:03:45.123 node1 1048576kB 0 / 0 00:03:45.123 node1 2048kB 0 / 0 00:03:45.123 00:03:45.123 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.123 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:45.123 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:45.123 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:45.123 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:45.123 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:45.123 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:45.123 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:45.123 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:45.123 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:45.123 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:45.123 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:45.123 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:45.123 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:45.123 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:45.123 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:45.123 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:45.123 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:45.123 17:29:06 -- spdk/autotest.sh@141 -- # uname -s 00:03:45.123 17:29:06 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:45.123 17:29:06 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:45.123 17:29:06 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.664 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:00:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:03:47.664 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:47.664 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:48.603 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:48.603 17:29:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:49.542 17:29:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:49.542 17:29:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:49.542 17:29:10 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:03:49.542 17:29:10 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:03:49.542 17:29:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:49.542 17:29:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:49.542 17:29:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:49.542 17:29:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:49.542 17:29:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:49.542 17:29:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:49.542 17:29:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:49.542 17:29:11 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.835 Waiting for block devices as requested 00:03:52.835 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:52.835 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:52.835 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:52.835 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:52.835 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:52.835 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:52.835 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:52.835 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:52.835 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:52.835 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:53.095 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:53.095 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:53.095 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:53.354 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:53.354 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:53.354 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:53.354 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:53.354 17:29:14 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:53.614 17:29:14 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:53.614 17:29:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:53.614 17:29:14 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:53.614 17:29:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:53.614 17:29:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:53.614 17:29:14 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:53.614 17:29:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:53.614 17:29:14 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:03:53.614 17:29:14 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:03:53.614 17:29:14 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:03:53.614 17:29:14 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:53.614 17:29:14 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:53.614 17:29:14 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:03:53.614 17:29:14 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:53.614 17:29:14 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:53.614 17:29:14 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:53.614 17:29:14 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:53.614 17:29:14 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:53.614 17:29:14 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:53.614 17:29:14 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:53.614 17:29:14 -- common/autotest_common.sh@1542 -- # continue 00:03:53.614 17:29:14 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:53.614 17:29:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:53.614 17:29:14 -- common/autotest_common.sh@10 -- # set +x 00:03:53.614 17:29:15 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:53.614 17:29:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:53.614 17:29:15 -- common/autotest_common.sh@10 -- # set +x 00:03:53.614 17:29:15 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.296 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.296 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:57.237 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:57.237 17:29:18 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:57.237 17:29:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:57.237 17:29:18 -- common/autotest_common.sh@10 -- # set +x 00:03:57.237 17:29:18 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:57.237 17:29:18 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:57.237 17:29:18 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:57.237 17:29:18 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:57.237 17:29:18 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:57.237 17:29:18 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:57.237 17:29:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:57.237 
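[Editor's note] The xtrace block above is autotest_common.sh resolving the NVMe character device behind PCI address 0000:5e:00.0 and parsing two id-ctrl fields before deciding whether a namespace revert is needed; here unvmcap comes back as 0, so the loop simply continues. A condensed, illustrative sketch of the same checks (it assumes nvme-cli is installed and hard-codes the BDF seen in this run; it is not the verbatim helper):

  bdf=0000:5e:00.0
  # Find the controller node whose sysfs path sits under this PCI address.
  for c in /sys/class/nvme/nvme*; do
    readlink -f "$c" | grep -q "$bdf/nvme/nvme" && ctrlr=/dev/$(basename "$c")
  done
  # OACS (Optional Admin Command Support): bit 3 set means namespace management.
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)       # ' 0xe' in this run
  (( oacs & 0x8 )) && echo "namespace management supported"
  # UNVMCAP of 0 means no unallocated NVM capacity, so there is nothing to revert.
  nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2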
17:29:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:57.237 17:29:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:57.237 17:29:18 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:57.237 17:29:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:57.237 17:29:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:57.237 17:29:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:57.237 17:29:18 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:57.237 17:29:18 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:57.237 17:29:18 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:03:57.237 17:29:18 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:57.237 17:29:18 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:03:57.237 17:29:18 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:03:57.237 17:29:18 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:03:57.237 17:29:18 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=422149 00:03:57.237 17:29:18 -- common/autotest_common.sh@1583 -- # waitforlisten 422149 00:03:57.237 17:29:18 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:57.237 17:29:18 -- common/autotest_common.sh@819 -- # '[' -z 422149 ']' 00:03:57.237 17:29:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.237 17:29:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:57.237 17:29:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.237 17:29:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:57.237 17:29:18 -- common/autotest_common.sh@10 -- # set +x 00:03:57.497 [2024-07-24 17:29:18.843544] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
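[Editor's note] The trace above shows opal_revert_cleanup building its list of candidate controllers before launching spdk_tgt: gen_nvme.sh emits an attach-controller config, jq pulls out the PCI addresses, and each address is kept only if its sysfs device ID is 0x0a54 (the ID matched in this run). An illustrative condensation of that discovery, using the workspace paths from this job:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # gen_nvme.sh prints a bdev_nvme_attach_controller config; jq extracts the BDFs.
  mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  opal_bdfs=()
  for bdf in "${all_bdfs[@]}"; do
    # Keep only controllers whose PCI device ID matches 0x0a54.
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && opal_bdfs+=("$bdf")
  done
  printf '%s\n' "${opal_bdfs[@]}"   # -> 0000:5e:00.0 in this run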
00:03:57.497 [2024-07-24 17:29:18.843591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422149 ] 00:03:57.497 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.497 [2024-07-24 17:29:18.896476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.497 [2024-07-24 17:29:18.976158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:57.497 [2024-07-24 17:29:18.976272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.067 17:29:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:58.067 17:29:19 -- common/autotest_common.sh@852 -- # return 0 00:03:58.067 17:29:19 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:58.067 17:29:19 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:58.067 17:29:19 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:01.362 nvme0n1 00:04:01.362 17:29:22 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:01.362 [2024-07-24 17:29:22.770979] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:01.362 request: 00:04:01.362 { 00:04:01.362 "nvme_ctrlr_name": "nvme0", 00:04:01.362 "password": "test", 00:04:01.362 "method": "bdev_nvme_opal_revert", 00:04:01.362 "req_id": 1 00:04:01.362 } 00:04:01.362 Got JSON-RPC error response 00:04:01.362 response: 00:04:01.362 { 00:04:01.362 "code": -32602, 00:04:01.362 "message": "Invalid parameters" 00:04:01.362 } 00:04:01.362 17:29:22 -- common/autotest_common.sh@1589 -- # true 00:04:01.362 17:29:22 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:01.362 17:29:22 -- common/autotest_common.sh@1593 -- # killprocess 422149 00:04:01.362 17:29:22 -- common/autotest_common.sh@926 -- # '[' -z 422149 ']' 00:04:01.362 17:29:22 -- common/autotest_common.sh@930 -- # kill -0 422149 00:04:01.362 17:29:22 -- common/autotest_common.sh@931 -- # uname 00:04:01.362 17:29:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:01.362 17:29:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 422149 00:04:01.362 17:29:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:01.362 17:29:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:01.362 17:29:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 422149' 00:04:01.362 killing process with pid 422149 00:04:01.362 17:29:22 -- common/autotest_common.sh@945 -- # kill 422149 00:04:01.362 17:29:22 -- common/autotest_common.sh@950 -- # wait 422149 00:04:03.316 17:29:24 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:03.316 17:29:24 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:03.316 17:29:24 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:03.316 17:29:24 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:03.316 17:29:24 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:03.316 17:29:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:03.316 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:04:03.316 17:29:24 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:03.316 17:29:24 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.316 17:29:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.316 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:04:03.316 ************************************ 00:04:03.316 START TEST env 00:04:03.316 ************************************ 00:04:03.316 17:29:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:03.316 * Looking for test storage... 00:04:03.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:03.316 17:29:24 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.316 17:29:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.316 17:29:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.316 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:04:03.316 ************************************ 00:04:03.316 START TEST env_memory 00:04:03.316 ************************************ 00:04:03.316 17:29:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.316 00:04:03.316 00:04:03.316 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.316 http://cunit.sourceforge.net/ 00:04:03.316 00:04:03.316 00:04:03.316 Suite: memory 00:04:03.316 Test: alloc and free memory map ...[2024-07-24 17:29:24.604068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.316 passed 00:04:03.316 Test: mem map translation ...[2024-07-24 17:29:24.622317] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.316 [2024-07-24 17:29:24.622334] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.316 [2024-07-24 17:29:24.622369] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.316 [2024-07-24 17:29:24.622375] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:03.316 passed 00:04:03.316 Test: mem map registration ...[2024-07-24 17:29:24.659209] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:03.316 [2024-07-24 17:29:24.659223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:03.316 passed 00:04:03.316 Test: mem map adjacent registrations ...passed 00:04:03.316 00:04:03.316 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.316 suites 1 1 n/a 0 0 00:04:03.316 tests 4 4 4 0 0 00:04:03.316 asserts 152 152 152 0 n/a 00:04:03.316 00:04:03.316 Elapsed time = 0.138 seconds 00:04:03.316 00:04:03.316 real 0m0.149s 00:04:03.316 user 0m0.139s 00:04:03.316 sys 0m0.010s 00:04:03.316 17:29:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.316 17:29:24 -- common/autotest_common.sh@10 -- # set +x 
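[Editor's note] The env_memory output above comes from SPDK's memory-map unit test; the *ERROR* lines are expected, since memory_ut deliberately feeds unaligned lengths and out-of-range addresses to spdk_mem_map_set_translation()/spdk_mem_register() to confirm they are rejected, and the CUnit summary (tests 4 4 4 0 0) shows all four cases passed. To re-run just this unit outside the harness, a sketch using the binary path built in this workspace:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Runs the CUnit suite directly; expect the same *ERROR* lines followed by
  # "tests 4 4 4 0 0" in the run summary.
  "$rootdir/test/env/memory/memory_ut"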
00:04:03.316 ************************************ 00:04:03.316 END TEST env_memory 00:04:03.316 ************************************ 00:04:03.316 17:29:24 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:03.316 17:29:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.316 17:29:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.316 17:29:24 -- common/autotest_common.sh@10 -- # set +x 00:04:03.316 ************************************ 00:04:03.316 START TEST env_vtophys 00:04:03.316 ************************************ 00:04:03.316 17:29:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:03.316 EAL: lib.eal log level changed from notice to debug 00:04:03.316 EAL: Detected lcore 0 as core 0 on socket 0 00:04:03.316 EAL: Detected lcore 1 as core 1 on socket 0 00:04:03.316 EAL: Detected lcore 2 as core 2 on socket 0 00:04:03.316 EAL: Detected lcore 3 as core 3 on socket 0 00:04:03.316 EAL: Detected lcore 4 as core 4 on socket 0 00:04:03.316 EAL: Detected lcore 5 as core 5 on socket 0 00:04:03.316 EAL: Detected lcore 6 as core 6 on socket 0 00:04:03.316 EAL: Detected lcore 7 as core 8 on socket 0 00:04:03.316 EAL: Detected lcore 8 as core 9 on socket 0 00:04:03.316 EAL: Detected lcore 9 as core 10 on socket 0 00:04:03.316 EAL: Detected lcore 10 as core 11 on socket 0 00:04:03.316 EAL: Detected lcore 11 as core 12 on socket 0 00:04:03.316 EAL: Detected lcore 12 as core 13 on socket 0 00:04:03.316 EAL: Detected lcore 13 as core 16 on socket 0 00:04:03.316 EAL: Detected lcore 14 as core 17 on socket 0 00:04:03.316 EAL: Detected lcore 15 as core 18 on socket 0 00:04:03.316 EAL: Detected lcore 16 as core 19 on socket 0 00:04:03.316 EAL: Detected lcore 17 as core 20 on socket 0 00:04:03.316 EAL: Detected lcore 18 as core 21 on socket 0 00:04:03.316 EAL: Detected lcore 19 as core 25 on socket 0 00:04:03.316 EAL: Detected lcore 20 as core 26 on socket 0 00:04:03.316 EAL: Detected lcore 21 as core 27 on socket 0 00:04:03.316 EAL: Detected lcore 22 as core 28 on socket 0 00:04:03.316 EAL: Detected lcore 23 as core 29 on socket 0 00:04:03.316 EAL: Detected lcore 24 as core 0 on socket 1 00:04:03.316 EAL: Detected lcore 25 as core 1 on socket 1 00:04:03.316 EAL: Detected lcore 26 as core 2 on socket 1 00:04:03.316 EAL: Detected lcore 27 as core 3 on socket 1 00:04:03.316 EAL: Detected lcore 28 as core 4 on socket 1 00:04:03.316 EAL: Detected lcore 29 as core 5 on socket 1 00:04:03.316 EAL: Detected lcore 30 as core 6 on socket 1 00:04:03.316 EAL: Detected lcore 31 as core 9 on socket 1 00:04:03.316 EAL: Detected lcore 32 as core 10 on socket 1 00:04:03.316 EAL: Detected lcore 33 as core 11 on socket 1 00:04:03.316 EAL: Detected lcore 34 as core 12 on socket 1 00:04:03.316 EAL: Detected lcore 35 as core 13 on socket 1 00:04:03.316 EAL: Detected lcore 36 as core 16 on socket 1 00:04:03.316 EAL: Detected lcore 37 as core 17 on socket 1 00:04:03.316 EAL: Detected lcore 38 as core 18 on socket 1 00:04:03.316 EAL: Detected lcore 39 as core 19 on socket 1 00:04:03.316 EAL: Detected lcore 40 as core 20 on socket 1 00:04:03.316 EAL: Detected lcore 41 as core 21 on socket 1 00:04:03.316 EAL: Detected lcore 42 as core 24 on socket 1 00:04:03.316 EAL: Detected lcore 43 as core 25 on socket 1 00:04:03.316 EAL: Detected lcore 44 as core 26 on socket 1 00:04:03.316 EAL: Detected lcore 45 as core 27 on socket 1 00:04:03.316 EAL: Detected lcore 46 as 
core 28 on socket 1 00:04:03.316 EAL: Detected lcore 47 as core 29 on socket 1 00:04:03.316 EAL: Detected lcore 48 as core 0 on socket 0 00:04:03.317 EAL: Detected lcore 49 as core 1 on socket 0 00:04:03.317 EAL: Detected lcore 50 as core 2 on socket 0 00:04:03.317 EAL: Detected lcore 51 as core 3 on socket 0 00:04:03.317 EAL: Detected lcore 52 as core 4 on socket 0 00:04:03.317 EAL: Detected lcore 53 as core 5 on socket 0 00:04:03.317 EAL: Detected lcore 54 as core 6 on socket 0 00:04:03.317 EAL: Detected lcore 55 as core 8 on socket 0 00:04:03.317 EAL: Detected lcore 56 as core 9 on socket 0 00:04:03.317 EAL: Detected lcore 57 as core 10 on socket 0 00:04:03.317 EAL: Detected lcore 58 as core 11 on socket 0 00:04:03.317 EAL: Detected lcore 59 as core 12 on socket 0 00:04:03.317 EAL: Detected lcore 60 as core 13 on socket 0 00:04:03.317 EAL: Detected lcore 61 as core 16 on socket 0 00:04:03.317 EAL: Detected lcore 62 as core 17 on socket 0 00:04:03.317 EAL: Detected lcore 63 as core 18 on socket 0 00:04:03.317 EAL: Detected lcore 64 as core 19 on socket 0 00:04:03.317 EAL: Detected lcore 65 as core 20 on socket 0 00:04:03.317 EAL: Detected lcore 66 as core 21 on socket 0 00:04:03.317 EAL: Detected lcore 67 as core 25 on socket 0 00:04:03.317 EAL: Detected lcore 68 as core 26 on socket 0 00:04:03.317 EAL: Detected lcore 69 as core 27 on socket 0 00:04:03.317 EAL: Detected lcore 70 as core 28 on socket 0 00:04:03.317 EAL: Detected lcore 71 as core 29 on socket 0 00:04:03.317 EAL: Detected lcore 72 as core 0 on socket 1 00:04:03.317 EAL: Detected lcore 73 as core 1 on socket 1 00:04:03.317 EAL: Detected lcore 74 as core 2 on socket 1 00:04:03.317 EAL: Detected lcore 75 as core 3 on socket 1 00:04:03.317 EAL: Detected lcore 76 as core 4 on socket 1 00:04:03.317 EAL: Detected lcore 77 as core 5 on socket 1 00:04:03.317 EAL: Detected lcore 78 as core 6 on socket 1 00:04:03.317 EAL: Detected lcore 79 as core 9 on socket 1 00:04:03.317 EAL: Detected lcore 80 as core 10 on socket 1 00:04:03.317 EAL: Detected lcore 81 as core 11 on socket 1 00:04:03.317 EAL: Detected lcore 82 as core 12 on socket 1 00:04:03.317 EAL: Detected lcore 83 as core 13 on socket 1 00:04:03.317 EAL: Detected lcore 84 as core 16 on socket 1 00:04:03.317 EAL: Detected lcore 85 as core 17 on socket 1 00:04:03.317 EAL: Detected lcore 86 as core 18 on socket 1 00:04:03.317 EAL: Detected lcore 87 as core 19 on socket 1 00:04:03.317 EAL: Detected lcore 88 as core 20 on socket 1 00:04:03.317 EAL: Detected lcore 89 as core 21 on socket 1 00:04:03.317 EAL: Detected lcore 90 as core 24 on socket 1 00:04:03.317 EAL: Detected lcore 91 as core 25 on socket 1 00:04:03.317 EAL: Detected lcore 92 as core 26 on socket 1 00:04:03.317 EAL: Detected lcore 93 as core 27 on socket 1 00:04:03.317 EAL: Detected lcore 94 as core 28 on socket 1 00:04:03.317 EAL: Detected lcore 95 as core 29 on socket 1 00:04:03.317 EAL: Maximum logical cores by configuration: 128 00:04:03.317 EAL: Detected CPU lcores: 96 00:04:03.317 EAL: Detected NUMA nodes: 2 00:04:03.317 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:03.317 EAL: Detected shared linkage of DPDK 00:04:03.317 EAL: No shared files mode enabled, IPC will be disabled 00:04:03.317 EAL: Bus pci wants IOVA as 'DC' 00:04:03.317 EAL: Buses did not request a specific IOVA mode. 00:04:03.317 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:03.317 EAL: Selected IOVA mode 'VA' 00:04:03.317 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.317 EAL: Probing VFIO support... 
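[Editor's note] The EAL probe above (lcore/NUMA detection, IOVA-as-VA selection, VFIO probing) can be cross-checked from a shell before a run. An illustrative sketch, not part of the test scripts, covering the three prerequisites EAL is reporting on: an active IOMMU, the vfio modules, and per-node 2 MB hugepage pools. The status table earlier in this log shows node1 with 0/0 hugepages, which is why EAL keeps printing "No free 2048 kB hugepages reported on node 1".

  # 1) IOMMU groups must exist for VFIO ("IOMMU type 1" support below):
  ls /sys/kernel/iommu_groups | head
  # 2) vfio modules should be loaded once setup.sh has bound the devices:
  lsmod | grep -E '^vfio'
  # 3) Free 2 MB hugepages per NUMA node (node1 is empty in this run):
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages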
00:04:03.317 EAL: IOMMU type 1 (Type 1) is supported 00:04:03.317 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:03.317 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:03.317 EAL: VFIO support initialized 00:04:03.317 EAL: Ask a virtual area of 0x2e000 bytes 00:04:03.317 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:03.317 EAL: Setting up physically contiguous memory... 00:04:03.317 EAL: Setting maximum number of open files to 524288 00:04:03.317 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:03.317 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:03.317 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:03.317 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.317 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:03.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.317 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.317 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:03.317 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:03.317 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.317 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:03.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.317 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.317 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:03.317 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:03.317 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.317 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:03.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.317 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.317 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:03.317 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:03.317 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.317 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:03.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.317 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.317 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:03.317 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:03.317 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:03.317 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.317 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:03.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.317 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.317 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:03.317 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:03.317 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.317 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:03.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.317 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.317 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:03.317 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:03.317 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.317 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:03.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.317 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.317 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:03.317 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:03.317 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.317 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:03.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.317 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.317 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:03.317 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:03.317 EAL: Hugepages will be freed exactly as allocated. 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: TSC frequency is ~2300000 KHz 00:04:03.317 EAL: Main lcore 0 is ready (tid=7fa735a62a00;cpuset=[0]) 00:04:03.317 EAL: Trying to obtain current memory policy. 00:04:03.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.317 EAL: Restoring previous memory policy: 0 00:04:03.317 EAL: request: mp_malloc_sync 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: Heap on socket 0 was expanded by 2MB 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:03.317 EAL: Mem event callback 'spdk:(nil)' registered 00:04:03.317 00:04:03.317 00:04:03.317 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.317 http://cunit.sourceforge.net/ 00:04:03.317 00:04:03.317 00:04:03.317 Suite: components_suite 00:04:03.317 Test: vtophys_malloc_test ...passed 00:04:03.317 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:03.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.317 EAL: Restoring previous memory policy: 4 00:04:03.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.317 EAL: request: mp_malloc_sync 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: Heap on socket 0 was expanded by 4MB 00:04:03.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.317 EAL: request: mp_malloc_sync 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: Heap on socket 0 was shrunk by 4MB 00:04:03.317 EAL: Trying to obtain current memory policy. 00:04:03.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.317 EAL: Restoring previous memory policy: 4 00:04:03.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.317 EAL: request: mp_malloc_sync 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: Heap on socket 0 was expanded by 6MB 00:04:03.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.317 EAL: request: mp_malloc_sync 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: Heap on socket 0 was shrunk by 6MB 00:04:03.317 EAL: Trying to obtain current memory policy. 00:04:03.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.317 EAL: Restoring previous memory policy: 4 00:04:03.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.317 EAL: request: mp_malloc_sync 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: Heap on socket 0 was expanded by 10MB 00:04:03.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.317 EAL: request: mp_malloc_sync 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: Heap on socket 0 was shrunk by 10MB 00:04:03.317 EAL: Trying to obtain current memory policy. 
00:04:03.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.317 EAL: Restoring previous memory policy: 4 00:04:03.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.317 EAL: request: mp_malloc_sync 00:04:03.317 EAL: No shared files mode enabled, IPC is disabled 00:04:03.317 EAL: Heap on socket 0 was expanded by 18MB 00:04:03.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.318 EAL: request: mp_malloc_sync 00:04:03.318 EAL: No shared files mode enabled, IPC is disabled 00:04:03.318 EAL: Heap on socket 0 was shrunk by 18MB 00:04:03.318 EAL: Trying to obtain current memory policy. 00:04:03.318 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.318 EAL: Restoring previous memory policy: 4 00:04:03.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.318 EAL: request: mp_malloc_sync 00:04:03.318 EAL: No shared files mode enabled, IPC is disabled 00:04:03.318 EAL: Heap on socket 0 was expanded by 34MB 00:04:03.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.318 EAL: request: mp_malloc_sync 00:04:03.318 EAL: No shared files mode enabled, IPC is disabled 00:04:03.318 EAL: Heap on socket 0 was shrunk by 34MB 00:04:03.318 EAL: Trying to obtain current memory policy. 00:04:03.318 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.318 EAL: Restoring previous memory policy: 4 00:04:03.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.318 EAL: request: mp_malloc_sync 00:04:03.318 EAL: No shared files mode enabled, IPC is disabled 00:04:03.318 EAL: Heap on socket 0 was expanded by 66MB 00:04:03.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.318 EAL: request: mp_malloc_sync 00:04:03.318 EAL: No shared files mode enabled, IPC is disabled 00:04:03.318 EAL: Heap on socket 0 was shrunk by 66MB 00:04:03.318 EAL: Trying to obtain current memory policy. 00:04:03.318 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.318 EAL: Restoring previous memory policy: 4 00:04:03.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.318 EAL: request: mp_malloc_sync 00:04:03.318 EAL: No shared files mode enabled, IPC is disabled 00:04:03.318 EAL: Heap on socket 0 was expanded by 130MB 00:04:03.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.579 EAL: request: mp_malloc_sync 00:04:03.579 EAL: No shared files mode enabled, IPC is disabled 00:04:03.579 EAL: Heap on socket 0 was shrunk by 130MB 00:04:03.579 EAL: Trying to obtain current memory policy. 00:04:03.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.579 EAL: Restoring previous memory policy: 4 00:04:03.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.579 EAL: request: mp_malloc_sync 00:04:03.579 EAL: No shared files mode enabled, IPC is disabled 00:04:03.579 EAL: Heap on socket 0 was expanded by 258MB 00:04:03.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.579 EAL: request: mp_malloc_sync 00:04:03.579 EAL: No shared files mode enabled, IPC is disabled 00:04:03.579 EAL: Heap on socket 0 was shrunk by 258MB 00:04:03.579 EAL: Trying to obtain current memory policy. 
00:04:03.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.579 EAL: Restoring previous memory policy: 4 00:04:03.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.579 EAL: request: mp_malloc_sync 00:04:03.579 EAL: No shared files mode enabled, IPC is disabled 00:04:03.579 EAL: Heap on socket 0 was expanded by 514MB 00:04:03.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.839 EAL: request: mp_malloc_sync 00:04:03.839 EAL: No shared files mode enabled, IPC is disabled 00:04:03.839 EAL: Heap on socket 0 was shrunk by 514MB 00:04:03.839 EAL: Trying to obtain current memory policy. 00:04:03.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.099 EAL: Restoring previous memory policy: 4 00:04:04.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.099 EAL: request: mp_malloc_sync 00:04:04.099 EAL: No shared files mode enabled, IPC is disabled 00:04:04.099 EAL: Heap on socket 0 was expanded by 1026MB 00:04:04.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.358 EAL: request: mp_malloc_sync 00:04:04.358 EAL: No shared files mode enabled, IPC is disabled 00:04:04.358 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:04.359 passed 00:04:04.359 00:04:04.359 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.359 suites 1 1 n/a 0 0 00:04:04.359 tests 2 2 2 0 0 00:04:04.359 asserts 497 497 497 0 n/a 00:04:04.359 00:04:04.359 Elapsed time = 0.958 seconds 00:04:04.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.359 EAL: request: mp_malloc_sync 00:04:04.359 EAL: No shared files mode enabled, IPC is disabled 00:04:04.359 EAL: Heap on socket 0 was shrunk by 2MB 00:04:04.359 EAL: No shared files mode enabled, IPC is disabled 00:04:04.359 EAL: No shared files mode enabled, IPC is disabled 00:04:04.359 EAL: No shared files mode enabled, IPC is disabled 00:04:04.359 00:04:04.359 real 0m1.070s 00:04:04.359 user 0m0.634s 00:04:04.359 sys 0m0.406s 00:04:04.359 17:29:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.359 17:29:25 -- common/autotest_common.sh@10 -- # set +x 00:04:04.359 ************************************ 00:04:04.359 END TEST env_vtophys 00:04:04.359 ************************************ 00:04:04.359 17:29:25 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:04.359 17:29:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:04.359 17:29:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.359 17:29:25 -- common/autotest_common.sh@10 -- # set +x 00:04:04.359 ************************************ 00:04:04.359 START TEST env_pci 00:04:04.359 ************************************ 00:04:04.359 17:29:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:04.359 00:04:04.359 00:04:04.359 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.359 http://cunit.sourceforge.net/ 00:04:04.359 00:04:04.359 00:04:04.359 Suite: pci 00:04:04.359 Test: pci_hook ...[2024-07-24 17:29:25.872676] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 423505 has claimed it 00:04:04.359 EAL: Cannot find device (10000:00:01.0) 00:04:04.359 EAL: Failed to attach device on primary process 00:04:04.359 passed 00:04:04.359 00:04:04.359 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.359 suites 1 1 n/a 0 0 00:04:04.359 tests 1 1 1 0 0 
00:04:04.359 asserts 25 25 25 0 n/a 00:04:04.359 00:04:04.359 Elapsed time = 0.025 seconds 00:04:04.359 00:04:04.359 real 0m0.044s 00:04:04.359 user 0m0.012s 00:04:04.359 sys 0m0.032s 00:04:04.359 17:29:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.359 17:29:25 -- common/autotest_common.sh@10 -- # set +x 00:04:04.359 ************************************ 00:04:04.359 END TEST env_pci 00:04:04.359 ************************************ 00:04:04.359 17:29:25 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:04.359 17:29:25 -- env/env.sh@15 -- # uname 00:04:04.359 17:29:25 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:04.359 17:29:25 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:04.359 17:29:25 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.359 17:29:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:04.359 17:29:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.359 17:29:25 -- common/autotest_common.sh@10 -- # set +x 00:04:04.359 ************************************ 00:04:04.359 START TEST env_dpdk_post_init 00:04:04.359 ************************************ 00:04:04.359 17:29:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.619 EAL: Detected CPU lcores: 96 00:04:04.619 EAL: Detected NUMA nodes: 2 00:04:04.619 EAL: Detected shared linkage of DPDK 00:04:04.619 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.619 EAL: Selected IOVA mode 'VA' 00:04:04.619 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.619 EAL: VFIO support initialized 00:04:04.619 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.619 EAL: Using IOMMU type 1 (Type 1) 00:04:04.619 EAL: Ignore mapping IO port bar(1) 00:04:04.619 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:04.619 EAL: Ignore mapping IO port bar(1) 00:04:04.619 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:04.619 EAL: Ignore mapping IO port bar(1) 00:04:04.619 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:04.619 EAL: Ignore mapping IO port bar(1) 00:04:04.619 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:04.619 EAL: Ignore mapping IO port bar(1) 00:04:04.619 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:04.619 EAL: Ignore mapping IO port bar(1) 00:04:04.619 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:04.619 EAL: Ignore mapping IO port bar(1) 00:04:04.619 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:04.619 EAL: Ignore mapping IO port bar(1) 00:04:04.619 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:05.557 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:05.557 EAL: Ignore mapping IO port bar(1) 00:04:05.557 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:05.557 EAL: Ignore mapping IO port bar(1) 00:04:05.557 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:05.557 EAL: Ignore mapping IO port bar(1) 00:04:05.557 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 
00:04:05.557 EAL: Ignore mapping IO port bar(1) 00:04:05.557 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:05.557 EAL: Ignore mapping IO port bar(1) 00:04:05.557 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:05.557 EAL: Ignore mapping IO port bar(1) 00:04:05.557 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:05.557 EAL: Ignore mapping IO port bar(1) 00:04:05.557 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:05.557 EAL: Ignore mapping IO port bar(1) 00:04:05.557 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:08.851 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:08.851 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:08.851 Starting DPDK initialization... 00:04:08.851 Starting SPDK post initialization... 00:04:08.851 SPDK NVMe probe 00:04:08.851 Attaching to 0000:5e:00.0 00:04:08.851 Attached to 0000:5e:00.0 00:04:08.851 Cleaning up... 00:04:08.851 00:04:08.851 real 0m4.322s 00:04:08.851 user 0m3.291s 00:04:08.851 sys 0m0.108s 00:04:08.851 17:29:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.851 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:04:08.851 ************************************ 00:04:08.851 END TEST env_dpdk_post_init 00:04:08.851 ************************************ 00:04:08.851 17:29:30 -- env/env.sh@26 -- # uname 00:04:08.851 17:29:30 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:08.851 17:29:30 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.851 17:29:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.851 17:29:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.851 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:04:08.851 ************************************ 00:04:08.851 START TEST env_mem_callbacks 00:04:08.851 ************************************ 00:04:08.851 17:29:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.851 EAL: Detected CPU lcores: 96 00:04:08.851 EAL: Detected NUMA nodes: 2 00:04:08.851 EAL: Detected shared linkage of DPDK 00:04:08.851 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.851 EAL: Selected IOVA mode 'VA' 00:04:08.851 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.851 EAL: VFIO support initialized 00:04:08.851 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.851 00:04:08.851 00:04:08.851 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.851 http://cunit.sourceforge.net/ 00:04:08.851 00:04:08.851 00:04:08.851 Suite: memory 00:04:08.851 Test: test ... 
00:04:08.851 register 0x200000200000 2097152 00:04:08.851 malloc 3145728 00:04:08.851 register 0x200000400000 4194304 00:04:08.851 buf 0x200000500000 len 3145728 PASSED 00:04:08.851 malloc 64 00:04:08.851 buf 0x2000004fff40 len 64 PASSED 00:04:08.851 malloc 4194304 00:04:08.851 register 0x200000800000 6291456 00:04:08.851 buf 0x200000a00000 len 4194304 PASSED 00:04:08.851 free 0x200000500000 3145728 00:04:08.851 free 0x2000004fff40 64 00:04:08.851 unregister 0x200000400000 4194304 PASSED 00:04:08.851 free 0x200000a00000 4194304 00:04:08.851 unregister 0x200000800000 6291456 PASSED 00:04:08.851 malloc 8388608 00:04:08.851 register 0x200000400000 10485760 00:04:08.851 buf 0x200000600000 len 8388608 PASSED 00:04:08.851 free 0x200000600000 8388608 00:04:08.851 unregister 0x200000400000 10485760 PASSED 00:04:08.851 passed 00:04:08.851 00:04:08.851 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.851 suites 1 1 n/a 0 0 00:04:08.851 tests 1 1 1 0 0 00:04:08.851 asserts 15 15 15 0 n/a 00:04:08.851 00:04:08.851 Elapsed time = 0.005 seconds 00:04:08.851 00:04:08.851 real 0m0.054s 00:04:08.851 user 0m0.019s 00:04:08.851 sys 0m0.035s 00:04:08.851 17:29:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.851 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:04:08.851 ************************************ 00:04:08.851 END TEST env_mem_callbacks 00:04:08.851 ************************************ 00:04:08.851 00:04:08.851 real 0m5.923s 00:04:08.851 user 0m4.207s 00:04:08.851 sys 0m0.793s 00:04:08.851 17:29:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.851 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:04:08.851 ************************************ 00:04:08.851 END TEST env 00:04:08.851 ************************************ 00:04:08.851 17:29:30 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:08.851 17:29:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.851 17:29:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.851 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:04:08.851 ************************************ 00:04:08.851 START TEST rpc 00:04:08.851 ************************************ 00:04:08.851 17:29:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:09.112 * Looking for test storage... 00:04:09.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:09.112 17:29:30 -- rpc/rpc.sh@65 -- # spdk_pid=424329 00:04:09.112 17:29:30 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.112 17:29:30 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:09.112 17:29:30 -- rpc/rpc.sh@67 -- # waitforlisten 424329 00:04:09.112 17:29:30 -- common/autotest_common.sh@819 -- # '[' -z 424329 ']' 00:04:09.112 17:29:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.112 17:29:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:09.112 17:29:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
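[Editor's note] rpc.sh launches a second spdk_tgt (-e bdev enables the bdev tracepoint group, as the app_setup_trace notice below confirms) and blocks in waitforlisten until the JSON-RPC socket answers; the trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100. A minimal sketch of that start-and-wait pattern, with rpc_get_methods used purely as a readiness probe (the harness's waitforlisten is more involved):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$rootdir/build/bin/spdk_tgt" -e bdev &
  spdk_pid=$!
  # Poll the JSON-RPC socket until the target answers.
  until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.5
  done
  # ... run the rpc_* test groups against the socket, then shut the target down:
  kill "$spdk_pid"; wait "$spdk_pid"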
00:04:09.112 17:29:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:09.112 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:04:09.112 [2024-07-24 17:29:30.559095] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:09.112 [2024-07-24 17:29:30.559149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424329 ] 00:04:09.112 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.112 [2024-07-24 17:29:30.614892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.112 [2024-07-24 17:29:30.686478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:09.112 [2024-07-24 17:29:30.686607] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:09.112 [2024-07-24 17:29:30.686616] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 424329' to capture a snapshot of events at runtime. 00:04:09.112 [2024-07-24 17:29:30.686623] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid424329 for offline analysis/debug. 00:04:09.112 [2024-07-24 17:29:30.686641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.052 17:29:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:10.052 17:29:31 -- common/autotest_common.sh@852 -- # return 0 00:04:10.052 17:29:31 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:10.052 17:29:31 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:10.052 17:29:31 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:10.052 17:29:31 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:10.052 17:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.052 17:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.052 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.052 ************************************ 00:04:10.052 START TEST rpc_integrity 00:04:10.052 ************************************ 00:04:10.052 17:29:31 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:10.052 17:29:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:10.052 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.052 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.052 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.052 17:29:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:10.052 17:29:31 -- rpc/rpc.sh@13 -- # jq length 00:04:10.052 17:29:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:10.052 17:29:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:10.052 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.052 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.052 17:29:31 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:04:10.052 17:29:31 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:10.052 17:29:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:10.052 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.052 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.052 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.052 17:29:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:10.052 { 00:04:10.052 "name": "Malloc0", 00:04:10.052 "aliases": [ 00:04:10.052 "088fd5ab-3f43-4505-aa7b-105480c3062c" 00:04:10.052 ], 00:04:10.052 "product_name": "Malloc disk", 00:04:10.052 "block_size": 512, 00:04:10.052 "num_blocks": 16384, 00:04:10.052 "uuid": "088fd5ab-3f43-4505-aa7b-105480c3062c", 00:04:10.052 "assigned_rate_limits": { 00:04:10.052 "rw_ios_per_sec": 0, 00:04:10.052 "rw_mbytes_per_sec": 0, 00:04:10.052 "r_mbytes_per_sec": 0, 00:04:10.052 "w_mbytes_per_sec": 0 00:04:10.052 }, 00:04:10.052 "claimed": false, 00:04:10.052 "zoned": false, 00:04:10.052 "supported_io_types": { 00:04:10.052 "read": true, 00:04:10.052 "write": true, 00:04:10.052 "unmap": true, 00:04:10.052 "write_zeroes": true, 00:04:10.052 "flush": true, 00:04:10.052 "reset": true, 00:04:10.052 "compare": false, 00:04:10.052 "compare_and_write": false, 00:04:10.052 "abort": true, 00:04:10.052 "nvme_admin": false, 00:04:10.052 "nvme_io": false 00:04:10.052 }, 00:04:10.052 "memory_domains": [ 00:04:10.052 { 00:04:10.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.052 "dma_device_type": 2 00:04:10.052 } 00:04:10.052 ], 00:04:10.052 "driver_specific": {} 00:04:10.052 } 00:04:10.052 ]' 00:04:10.052 17:29:31 -- rpc/rpc.sh@17 -- # jq length 00:04:10.052 17:29:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:10.052 17:29:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:10.052 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.052 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.052 [2024-07-24 17:29:31.478540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:10.052 [2024-07-24 17:29:31.478574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:10.052 [2024-07-24 17:29:31.478587] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d0a860 00:04:10.052 [2024-07-24 17:29:31.478593] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:10.052 [2024-07-24 17:29:31.479690] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:10.052 [2024-07-24 17:29:31.479711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:10.052 Passthru0 00:04:10.052 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.052 17:29:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:10.052 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.052 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.052 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.052 17:29:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:10.052 { 00:04:10.052 "name": "Malloc0", 00:04:10.052 "aliases": [ 00:04:10.052 "088fd5ab-3f43-4505-aa7b-105480c3062c" 00:04:10.052 ], 00:04:10.052 "product_name": "Malloc disk", 00:04:10.052 "block_size": 512, 00:04:10.052 "num_blocks": 16384, 00:04:10.052 "uuid": "088fd5ab-3f43-4505-aa7b-105480c3062c", 00:04:10.052 "assigned_rate_limits": { 00:04:10.052 "rw_ios_per_sec": 0, 00:04:10.052 "rw_mbytes_per_sec": 0, 00:04:10.052 
"r_mbytes_per_sec": 0, 00:04:10.052 "w_mbytes_per_sec": 0 00:04:10.052 }, 00:04:10.052 "claimed": true, 00:04:10.052 "claim_type": "exclusive_write", 00:04:10.052 "zoned": false, 00:04:10.052 "supported_io_types": { 00:04:10.053 "read": true, 00:04:10.053 "write": true, 00:04:10.053 "unmap": true, 00:04:10.053 "write_zeroes": true, 00:04:10.053 "flush": true, 00:04:10.053 "reset": true, 00:04:10.053 "compare": false, 00:04:10.053 "compare_and_write": false, 00:04:10.053 "abort": true, 00:04:10.053 "nvme_admin": false, 00:04:10.053 "nvme_io": false 00:04:10.053 }, 00:04:10.053 "memory_domains": [ 00:04:10.053 { 00:04:10.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.053 "dma_device_type": 2 00:04:10.053 } 00:04:10.053 ], 00:04:10.053 "driver_specific": {} 00:04:10.053 }, 00:04:10.053 { 00:04:10.053 "name": "Passthru0", 00:04:10.053 "aliases": [ 00:04:10.053 "f2e1c95c-c7cb-5305-bdce-5ccc55707236" 00:04:10.053 ], 00:04:10.053 "product_name": "passthru", 00:04:10.053 "block_size": 512, 00:04:10.053 "num_blocks": 16384, 00:04:10.053 "uuid": "f2e1c95c-c7cb-5305-bdce-5ccc55707236", 00:04:10.053 "assigned_rate_limits": { 00:04:10.053 "rw_ios_per_sec": 0, 00:04:10.053 "rw_mbytes_per_sec": 0, 00:04:10.053 "r_mbytes_per_sec": 0, 00:04:10.053 "w_mbytes_per_sec": 0 00:04:10.053 }, 00:04:10.053 "claimed": false, 00:04:10.053 "zoned": false, 00:04:10.053 "supported_io_types": { 00:04:10.053 "read": true, 00:04:10.053 "write": true, 00:04:10.053 "unmap": true, 00:04:10.053 "write_zeroes": true, 00:04:10.053 "flush": true, 00:04:10.053 "reset": true, 00:04:10.053 "compare": false, 00:04:10.053 "compare_and_write": false, 00:04:10.053 "abort": true, 00:04:10.053 "nvme_admin": false, 00:04:10.053 "nvme_io": false 00:04:10.053 }, 00:04:10.053 "memory_domains": [ 00:04:10.053 { 00:04:10.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.053 "dma_device_type": 2 00:04:10.053 } 00:04:10.053 ], 00:04:10.053 "driver_specific": { 00:04:10.053 "passthru": { 00:04:10.053 "name": "Passthru0", 00:04:10.053 "base_bdev_name": "Malloc0" 00:04:10.053 } 00:04:10.053 } 00:04:10.053 } 00:04:10.053 ]' 00:04:10.053 17:29:31 -- rpc/rpc.sh@21 -- # jq length 00:04:10.053 17:29:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:10.053 17:29:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:10.053 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.053 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.053 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.053 17:29:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:10.053 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.053 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.053 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.053 17:29:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:10.053 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.053 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.053 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.053 17:29:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:10.053 17:29:31 -- rpc/rpc.sh@26 -- # jq length 00:04:10.053 17:29:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:10.053 00:04:10.053 real 0m0.234s 00:04:10.053 user 0m0.139s 00:04:10.053 sys 0m0.034s 00:04:10.053 17:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.053 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.053 ************************************ 
00:04:10.053 END TEST rpc_integrity 00:04:10.053 ************************************ 00:04:10.053 17:29:31 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:10.053 17:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.053 17:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.053 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.053 ************************************ 00:04:10.053 START TEST rpc_plugins 00:04:10.053 ************************************ 00:04:10.053 17:29:31 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:10.053 17:29:31 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:10.053 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.053 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.053 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.053 17:29:31 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:10.053 17:29:31 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:10.053 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.053 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.313 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.313 17:29:31 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:10.313 { 00:04:10.313 "name": "Malloc1", 00:04:10.313 "aliases": [ 00:04:10.313 "cd85e82d-0e40-4721-9e9e-a07659166dc8" 00:04:10.313 ], 00:04:10.313 "product_name": "Malloc disk", 00:04:10.313 "block_size": 4096, 00:04:10.313 "num_blocks": 256, 00:04:10.313 "uuid": "cd85e82d-0e40-4721-9e9e-a07659166dc8", 00:04:10.313 "assigned_rate_limits": { 00:04:10.313 "rw_ios_per_sec": 0, 00:04:10.313 "rw_mbytes_per_sec": 0, 00:04:10.313 "r_mbytes_per_sec": 0, 00:04:10.313 "w_mbytes_per_sec": 0 00:04:10.313 }, 00:04:10.313 "claimed": false, 00:04:10.313 "zoned": false, 00:04:10.313 "supported_io_types": { 00:04:10.313 "read": true, 00:04:10.313 "write": true, 00:04:10.313 "unmap": true, 00:04:10.313 "write_zeroes": true, 00:04:10.313 "flush": true, 00:04:10.313 "reset": true, 00:04:10.313 "compare": false, 00:04:10.313 "compare_and_write": false, 00:04:10.313 "abort": true, 00:04:10.313 "nvme_admin": false, 00:04:10.313 "nvme_io": false 00:04:10.313 }, 00:04:10.313 "memory_domains": [ 00:04:10.313 { 00:04:10.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.313 "dma_device_type": 2 00:04:10.313 } 00:04:10.313 ], 00:04:10.313 "driver_specific": {} 00:04:10.313 } 00:04:10.313 ]' 00:04:10.313 17:29:31 -- rpc/rpc.sh@32 -- # jq length 00:04:10.313 17:29:31 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:10.313 17:29:31 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:10.313 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.313 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.313 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.313 17:29:31 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:10.313 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.313 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.313 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.313 17:29:31 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:10.313 17:29:31 -- rpc/rpc.sh@36 -- # jq length 00:04:10.313 17:29:31 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:10.313 00:04:10.313 real 0m0.122s 00:04:10.313 user 0m0.072s 00:04:10.313 sys 0m0.014s 00:04:10.313 17:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.313 17:29:31 -- 
common/autotest_common.sh@10 -- # set +x 00:04:10.313 ************************************ 00:04:10.313 END TEST rpc_plugins 00:04:10.313 ************************************ 00:04:10.313 17:29:31 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:10.313 17:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.313 17:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.313 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.313 ************************************ 00:04:10.313 START TEST rpc_trace_cmd_test 00:04:10.313 ************************************ 00:04:10.313 17:29:31 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:10.313 17:29:31 -- rpc/rpc.sh@40 -- # local info 00:04:10.313 17:29:31 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:10.313 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.313 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.313 17:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.313 17:29:31 -- rpc/rpc.sh@42 -- # info='{ 00:04:10.313 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid424329", 00:04:10.313 "tpoint_group_mask": "0x8", 00:04:10.313 "iscsi_conn": { 00:04:10.313 "mask": "0x2", 00:04:10.313 "tpoint_mask": "0x0" 00:04:10.313 }, 00:04:10.313 "scsi": { 00:04:10.313 "mask": "0x4", 00:04:10.313 "tpoint_mask": "0x0" 00:04:10.314 }, 00:04:10.314 "bdev": { 00:04:10.314 "mask": "0x8", 00:04:10.314 "tpoint_mask": "0xffffffffffffffff" 00:04:10.314 }, 00:04:10.314 "nvmf_rdma": { 00:04:10.314 "mask": "0x10", 00:04:10.314 "tpoint_mask": "0x0" 00:04:10.314 }, 00:04:10.314 "nvmf_tcp": { 00:04:10.314 "mask": "0x20", 00:04:10.314 "tpoint_mask": "0x0" 00:04:10.314 }, 00:04:10.314 "ftl": { 00:04:10.314 "mask": "0x40", 00:04:10.314 "tpoint_mask": "0x0" 00:04:10.314 }, 00:04:10.314 "blobfs": { 00:04:10.314 "mask": "0x80", 00:04:10.314 "tpoint_mask": "0x0" 00:04:10.314 }, 00:04:10.314 "dsa": { 00:04:10.314 "mask": "0x200", 00:04:10.314 "tpoint_mask": "0x0" 00:04:10.314 }, 00:04:10.314 "thread": { 00:04:10.314 "mask": "0x400", 00:04:10.314 "tpoint_mask": "0x0" 00:04:10.314 }, 00:04:10.314 "nvme_pcie": { 00:04:10.314 "mask": "0x800", 00:04:10.314 "tpoint_mask": "0x0" 00:04:10.314 }, 00:04:10.314 "iaa": { 00:04:10.314 "mask": "0x1000", 00:04:10.314 "tpoint_mask": "0x0" 00:04:10.314 }, 00:04:10.314 "nvme_tcp": { 00:04:10.314 "mask": "0x2000", 00:04:10.314 "tpoint_mask": "0x0" 00:04:10.314 }, 00:04:10.314 "bdev_nvme": { 00:04:10.314 "mask": "0x4000", 00:04:10.314 "tpoint_mask": "0x0" 00:04:10.314 } 00:04:10.314 }' 00:04:10.314 17:29:31 -- rpc/rpc.sh@43 -- # jq length 00:04:10.314 17:29:31 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:10.314 17:29:31 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:10.314 17:29:31 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:10.314 17:29:31 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:10.314 17:29:31 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:10.314 17:29:31 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:10.574 17:29:31 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:10.574 17:29:31 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:10.574 17:29:31 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:10.574 00:04:10.574 real 0m0.174s 00:04:10.574 user 0m0.149s 00:04:10.574 sys 0m0.017s 00:04:10.574 17:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.574 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.574 ************************************ 
00:04:10.574 END TEST rpc_trace_cmd_test 00:04:10.574 ************************************ 00:04:10.574 17:29:31 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:10.574 17:29:31 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:10.574 17:29:31 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:10.574 17:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.574 17:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.574 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.574 ************************************ 00:04:10.574 START TEST rpc_daemon_integrity 00:04:10.574 ************************************ 00:04:10.574 17:29:31 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:10.574 17:29:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:10.574 17:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.574 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:10.574 17:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.574 17:29:32 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:10.574 17:29:32 -- rpc/rpc.sh@13 -- # jq length 00:04:10.574 17:29:32 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:10.574 17:29:32 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:10.574 17:29:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.574 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:10.574 17:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.574 17:29:32 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:10.574 17:29:32 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:10.574 17:29:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.574 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:10.574 17:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.574 17:29:32 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:10.574 { 00:04:10.574 "name": "Malloc2", 00:04:10.574 "aliases": [ 00:04:10.574 "07488e36-c363-48d6-bdbf-ab8b7d161229" 00:04:10.574 ], 00:04:10.574 "product_name": "Malloc disk", 00:04:10.574 "block_size": 512, 00:04:10.574 "num_blocks": 16384, 00:04:10.574 "uuid": "07488e36-c363-48d6-bdbf-ab8b7d161229", 00:04:10.574 "assigned_rate_limits": { 00:04:10.574 "rw_ios_per_sec": 0, 00:04:10.574 "rw_mbytes_per_sec": 0, 00:04:10.574 "r_mbytes_per_sec": 0, 00:04:10.574 "w_mbytes_per_sec": 0 00:04:10.574 }, 00:04:10.574 "claimed": false, 00:04:10.574 "zoned": false, 00:04:10.574 "supported_io_types": { 00:04:10.574 "read": true, 00:04:10.574 "write": true, 00:04:10.574 "unmap": true, 00:04:10.574 "write_zeroes": true, 00:04:10.574 "flush": true, 00:04:10.574 "reset": true, 00:04:10.574 "compare": false, 00:04:10.574 "compare_and_write": false, 00:04:10.574 "abort": true, 00:04:10.574 "nvme_admin": false, 00:04:10.574 "nvme_io": false 00:04:10.574 }, 00:04:10.574 "memory_domains": [ 00:04:10.574 { 00:04:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.574 "dma_device_type": 2 00:04:10.574 } 00:04:10.574 ], 00:04:10.574 "driver_specific": {} 00:04:10.574 } 00:04:10.574 ]' 00:04:10.574 17:29:32 -- rpc/rpc.sh@17 -- # jq length 00:04:10.574 17:29:32 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:10.574 17:29:32 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:10.574 17:29:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.574 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:10.574 [2024-07-24 17:29:32.124347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:10.574 [2024-07-24 
17:29:32.124376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:10.574 [2024-07-24 17:29:32.124389] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d0b360 00:04:10.574 [2024-07-24 17:29:32.124396] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:10.574 [2024-07-24 17:29:32.125360] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:10.574 [2024-07-24 17:29:32.125380] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:10.574 Passthru0 00:04:10.574 17:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.574 17:29:32 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:10.574 17:29:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.574 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:10.574 17:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.574 17:29:32 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:10.574 { 00:04:10.574 "name": "Malloc2", 00:04:10.574 "aliases": [ 00:04:10.574 "07488e36-c363-48d6-bdbf-ab8b7d161229" 00:04:10.574 ], 00:04:10.574 "product_name": "Malloc disk", 00:04:10.574 "block_size": 512, 00:04:10.574 "num_blocks": 16384, 00:04:10.574 "uuid": "07488e36-c363-48d6-bdbf-ab8b7d161229", 00:04:10.574 "assigned_rate_limits": { 00:04:10.574 "rw_ios_per_sec": 0, 00:04:10.574 "rw_mbytes_per_sec": 0, 00:04:10.574 "r_mbytes_per_sec": 0, 00:04:10.574 "w_mbytes_per_sec": 0 00:04:10.574 }, 00:04:10.574 "claimed": true, 00:04:10.574 "claim_type": "exclusive_write", 00:04:10.574 "zoned": false, 00:04:10.574 "supported_io_types": { 00:04:10.574 "read": true, 00:04:10.574 "write": true, 00:04:10.574 "unmap": true, 00:04:10.574 "write_zeroes": true, 00:04:10.574 "flush": true, 00:04:10.574 "reset": true, 00:04:10.574 "compare": false, 00:04:10.574 "compare_and_write": false, 00:04:10.574 "abort": true, 00:04:10.574 "nvme_admin": false, 00:04:10.574 "nvme_io": false 00:04:10.574 }, 00:04:10.574 "memory_domains": [ 00:04:10.574 { 00:04:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.574 "dma_device_type": 2 00:04:10.574 } 00:04:10.574 ], 00:04:10.574 "driver_specific": {} 00:04:10.574 }, 00:04:10.574 { 00:04:10.574 "name": "Passthru0", 00:04:10.574 "aliases": [ 00:04:10.574 "1ed15cc9-56c0-5258-8643-0bd548a51435" 00:04:10.574 ], 00:04:10.574 "product_name": "passthru", 00:04:10.574 "block_size": 512, 00:04:10.574 "num_blocks": 16384, 00:04:10.574 "uuid": "1ed15cc9-56c0-5258-8643-0bd548a51435", 00:04:10.574 "assigned_rate_limits": { 00:04:10.574 "rw_ios_per_sec": 0, 00:04:10.574 "rw_mbytes_per_sec": 0, 00:04:10.574 "r_mbytes_per_sec": 0, 00:04:10.574 "w_mbytes_per_sec": 0 00:04:10.574 }, 00:04:10.574 "claimed": false, 00:04:10.574 "zoned": false, 00:04:10.574 "supported_io_types": { 00:04:10.574 "read": true, 00:04:10.574 "write": true, 00:04:10.574 "unmap": true, 00:04:10.574 "write_zeroes": true, 00:04:10.574 "flush": true, 00:04:10.574 "reset": true, 00:04:10.574 "compare": false, 00:04:10.574 "compare_and_write": false, 00:04:10.574 "abort": true, 00:04:10.574 "nvme_admin": false, 00:04:10.574 "nvme_io": false 00:04:10.574 }, 00:04:10.574 "memory_domains": [ 00:04:10.574 { 00:04:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.574 "dma_device_type": 2 00:04:10.574 } 00:04:10.574 ], 00:04:10.574 "driver_specific": { 00:04:10.574 "passthru": { 00:04:10.574 "name": "Passthru0", 00:04:10.574 "base_bdev_name": "Malloc2" 00:04:10.574 } 00:04:10.574 } 00:04:10.574 } 
00:04:10.574 ]' 00:04:10.574 17:29:32 -- rpc/rpc.sh@21 -- # jq length 00:04:10.835 17:29:32 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:10.835 17:29:32 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:10.835 17:29:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.835 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:10.835 17:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.835 17:29:32 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:10.835 17:29:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.835 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:10.835 17:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.835 17:29:32 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:10.835 17:29:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.835 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:10.835 17:29:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.835 17:29:32 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:10.835 17:29:32 -- rpc/rpc.sh@26 -- # jq length 00:04:10.835 17:29:32 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:10.835 00:04:10.835 real 0m0.254s 00:04:10.835 user 0m0.164s 00:04:10.835 sys 0m0.028s 00:04:10.835 17:29:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.835 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:10.835 ************************************ 00:04:10.835 END TEST rpc_daemon_integrity 00:04:10.835 ************************************ 00:04:10.835 17:29:32 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:10.835 17:29:32 -- rpc/rpc.sh@84 -- # killprocess 424329 00:04:10.835 17:29:32 -- common/autotest_common.sh@926 -- # '[' -z 424329 ']' 00:04:10.835 17:29:32 -- common/autotest_common.sh@930 -- # kill -0 424329 00:04:10.835 17:29:32 -- common/autotest_common.sh@931 -- # uname 00:04:10.835 17:29:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:10.835 17:29:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 424329 00:04:10.835 17:29:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:10.835 17:29:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:10.835 17:29:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 424329' 00:04:10.835 killing process with pid 424329 00:04:10.835 17:29:32 -- common/autotest_common.sh@945 -- # kill 424329 00:04:10.835 17:29:32 -- common/autotest_common.sh@950 -- # wait 424329 00:04:11.095 00:04:11.095 real 0m2.223s 00:04:11.095 user 0m2.791s 00:04:11.095 sys 0m0.570s 00:04:11.095 17:29:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.095 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:11.095 ************************************ 00:04:11.095 END TEST rpc 00:04:11.095 ************************************ 00:04:11.095 17:29:32 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:11.095 17:29:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.095 17:29:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.095 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:11.355 ************************************ 00:04:11.355 START TEST rpc_client 00:04:11.355 ************************************ 00:04:11.355 17:29:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:11.355 * 
Looking for test storage... 00:04:11.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:11.355 17:29:32 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:11.355 OK 00:04:11.355 17:29:32 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:11.355 00:04:11.355 real 0m0.089s 00:04:11.355 user 0m0.032s 00:04:11.355 sys 0m0.063s 00:04:11.355 17:29:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.355 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:11.355 ************************************ 00:04:11.355 END TEST rpc_client 00:04:11.355 ************************************ 00:04:11.355 17:29:32 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.355 17:29:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.355 17:29:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.355 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:11.355 ************************************ 00:04:11.356 START TEST json_config 00:04:11.356 ************************************ 00:04:11.356 17:29:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.356 17:29:32 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:11.356 17:29:32 -- nvmf/common.sh@7 -- # uname -s 00:04:11.356 17:29:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.356 17:29:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.356 17:29:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.356 17:29:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.356 17:29:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.356 17:29:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.356 17:29:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.356 17:29:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.356 17:29:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.356 17:29:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.356 17:29:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:11.356 17:29:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:11.356 17:29:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.356 17:29:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.356 17:29:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.356 17:29:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:11.356 17:29:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.356 17:29:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.356 17:29:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.356 17:29:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.356 17:29:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.356 17:29:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.356 17:29:32 -- paths/export.sh@5 -- # export PATH 00:04:11.356 17:29:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.356 17:29:32 -- nvmf/common.sh@46 -- # : 0 00:04:11.356 17:29:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:11.356 17:29:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:11.356 17:29:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:11.356 17:29:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.356 17:29:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.356 17:29:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:11.356 17:29:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:11.356 17:29:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:11.356 17:29:32 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:11.356 17:29:32 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:11.356 17:29:32 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:11.356 17:29:32 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:11.356 17:29:32 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:11.356 17:29:32 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:11.356 17:29:32 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:11.356 17:29:32 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:11.356 17:29:32 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:11.356 17:29:32 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:11.356 17:29:32 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:11.356 17:29:32 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:11.356 17:29:32 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:11.356 17:29:32 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:11.356 17:29:32 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:11.356 INFO: JSON configuration test init 00:04:11.356 17:29:32 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:11.356 17:29:32 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:11.356 17:29:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:11.356 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:11.356 17:29:32 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:11.356 17:29:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:11.356 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:11.356 17:29:32 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:11.356 17:29:32 -- json_config/json_config.sh@98 -- # local app=target 00:04:11.356 17:29:32 -- json_config/json_config.sh@99 -- # shift 00:04:11.356 17:29:32 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:11.356 17:29:32 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:11.356 17:29:32 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:11.356 17:29:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:11.356 17:29:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:11.356 17:29:32 -- json_config/json_config.sh@111 -- # app_pid[$app]=424996 00:04:11.356 17:29:32 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:11.356 Waiting for target to run... 00:04:11.356 17:29:32 -- json_config/json_config.sh@114 -- # waitforlisten 424996 /var/tmp/spdk_tgt.sock 00:04:11.356 17:29:32 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:11.356 17:29:32 -- common/autotest_common.sh@819 -- # '[' -z 424996 ']' 00:04:11.356 17:29:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.356 17:29:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:11.356 17:29:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.356 17:29:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:11.356 17:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:11.616 [2024-07-24 17:29:32.975331] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:11.616 [2024-07-24 17:29:32.975379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424996 ] 00:04:11.616 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.876 [2024-07-24 17:29:33.400162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.136 [2024-07-24 17:29:33.490299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:12.136 [2024-07-24 17:29:33.490421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.395 17:29:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:12.395 17:29:33 -- common/autotest_common.sh@852 -- # return 0 00:04:12.395 17:29:33 -- json_config/json_config.sh@115 -- # echo '' 00:04:12.395 00:04:12.395 17:29:33 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:12.395 17:29:33 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:12.395 17:29:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:12.395 17:29:33 -- common/autotest_common.sh@10 -- # set +x 00:04:12.395 17:29:33 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:12.395 17:29:33 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:12.395 17:29:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:12.395 17:29:33 -- common/autotest_common.sh@10 -- # set +x 00:04:12.395 17:29:33 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:12.395 17:29:33 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:12.395 17:29:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:15.734 17:29:36 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:15.734 17:29:36 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:15.734 17:29:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:15.734 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:04:15.734 17:29:36 -- json_config/json_config.sh@48 -- # local ret=0 00:04:15.734 17:29:36 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:15.734 17:29:36 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:15.734 17:29:36 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:15.734 17:29:36 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:15.734 17:29:36 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:15.734 17:29:37 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:15.734 17:29:37 -- json_config/json_config.sh@51 -- # local get_types 00:04:15.734 17:29:37 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:15.734 17:29:37 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:15.734 17:29:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:15.734 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:04:15.734 17:29:37 -- json_config/json_config.sh@58 -- # return 0 00:04:15.734 17:29:37 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:15.734 17:29:37 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:15.734 17:29:37 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:15.734 17:29:37 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:15.734 17:29:37 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:15.734 17:29:37 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:15.734 17:29:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:15.734 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:04:15.734 17:29:37 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:15.734 17:29:37 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:15.734 17:29:37 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:15.734 17:29:37 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.734 17:29:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.734 MallocForNvmf0 00:04:15.734 17:29:37 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.734 17:29:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.994 MallocForNvmf1 00:04:15.994 17:29:37 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.994 17:29:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.994 [2024-07-24 17:29:37.524880] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.994 17:29:37 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.994 17:29:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.253 17:29:37 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.253 17:29:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.512 17:29:37 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.512 17:29:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.512 17:29:38 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:16.512 17:29:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:16.772 [2024-07-24 17:29:38.203056] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
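Note: stripped of the xtrace noise, the create_nvmf_subsystem_config step above is just a handful of rpc.py calls against the target socket. A minimal sketch of the equivalent manual sequence, with the long workspace path shortened to $SPDK_DIR (an assumption for readability; the socket path and all arguments are taken from the trace, and the 8 MiB / 512-byte geometry matches the num_blocks=16384, block_size=512 bdev dumps earlier in this log):

# Rebuild the NVMe-oF/TCP configuration that json_config.sh sets up above.
# Assumes $SPDK_DIR points at the spdk checkout and spdk_tgt is already
# listening on /var/tmp/spdk_tgt.sock.
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }

rpc bdev_malloc_create 8 512  --name MallocForNvmf0    # 8 MiB bdev, 512 B blocks
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB bdev, 1024 B blocks
rpc nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, same -u/-c as the test
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The nvmf_subsystem_add_listener call is what produces the "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice logged just above.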
00:04:16.772 17:29:38 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:16.772 17:29:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:16.772 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:04:16.772 17:29:38 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:16.772 17:29:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:16.772 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:04:16.772 17:29:38 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:16.772 17:29:38 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:16.772 17:29:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.031 MallocBdevForConfigChangeCheck 00:04:17.031 17:29:38 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:17.031 17:29:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:17.031 17:29:38 -- common/autotest_common.sh@10 -- # set +x 00:04:17.031 17:29:38 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:17.031 17:29:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.290 17:29:38 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:17.290 INFO: shutting down applications... 00:04:17.290 17:29:38 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:17.290 17:29:38 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:17.290 17:29:38 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:17.290 17:29:38 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:19.199 Calling clear_iscsi_subsystem 00:04:19.199 Calling clear_nvmf_subsystem 00:04:19.199 Calling clear_nbd_subsystem 00:04:19.199 Calling clear_ublk_subsystem 00:04:19.199 Calling clear_vhost_blk_subsystem 00:04:19.199 Calling clear_vhost_scsi_subsystem 00:04:19.199 Calling clear_scheduler_subsystem 00:04:19.199 Calling clear_bdev_subsystem 00:04:19.199 Calling clear_accel_subsystem 00:04:19.199 Calling clear_vmd_subsystem 00:04:19.199 Calling clear_sock_subsystem 00:04:19.200 Calling clear_iobuf_subsystem 00:04:19.200 17:29:40 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:19.200 17:29:40 -- json_config/json_config.sh@396 -- # count=100 00:04:19.200 17:29:40 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:19.200 17:29:40 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.200 17:29:40 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:19.200 17:29:40 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:19.200 17:29:40 -- json_config/json_config.sh@398 -- # break 00:04:19.200 17:29:40 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:19.200 17:29:40 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:04:19.200 17:29:40 -- json_config/json_config.sh@120 -- # local app=target 00:04:19.200 17:29:40 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:19.200 17:29:40 -- json_config/json_config.sh@124 -- # [[ -n 424996 ]] 00:04:19.200 17:29:40 -- json_config/json_config.sh@127 -- # kill -SIGINT 424996 00:04:19.200 17:29:40 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:19.200 17:29:40 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:19.200 17:29:40 -- json_config/json_config.sh@130 -- # kill -0 424996 00:04:19.200 17:29:40 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:19.769 17:29:41 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:19.769 17:29:41 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:19.769 17:29:41 -- json_config/json_config.sh@130 -- # kill -0 424996 00:04:19.769 17:29:41 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:19.769 17:29:41 -- json_config/json_config.sh@132 -- # break 00:04:19.769 17:29:41 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:19.769 17:29:41 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:19.769 SPDK target shutdown done 00:04:19.769 17:29:41 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:19.769 INFO: relaunching applications... 00:04:19.769 17:29:41 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.769 17:29:41 -- json_config/json_config.sh@98 -- # local app=target 00:04:19.769 17:29:41 -- json_config/json_config.sh@99 -- # shift 00:04:19.769 17:29:41 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:19.769 17:29:41 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:19.769 17:29:41 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:19.769 17:29:41 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:19.769 17:29:41 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:19.769 17:29:41 -- json_config/json_config.sh@111 -- # app_pid[$app]=426523 00:04:19.769 17:29:41 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:19.769 Waiting for target to run... 00:04:19.769 17:29:41 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.769 17:29:41 -- json_config/json_config.sh@114 -- # waitforlisten 426523 /var/tmp/spdk_tgt.sock 00:04:19.769 17:29:41 -- common/autotest_common.sh@819 -- # '[' -z 426523 ']' 00:04:19.769 17:29:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.769 17:29:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:19.769 17:29:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.769 17:29:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:19.769 17:29:41 -- common/autotest_common.sh@10 -- # set +x 00:04:19.769 [2024-07-24 17:29:41.219185] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:19.769 [2024-07-24 17:29:41.219239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426523 ] 00:04:19.769 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.377 [2024-07-24 17:29:41.649682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.377 [2024-07-24 17:29:41.737279] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:20.377 [2024-07-24 17:29:41.737377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.669 [2024-07-24 17:29:44.738881] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.669 [2024-07-24 17:29:44.771219] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:23.929 17:29:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:23.929 17:29:45 -- common/autotest_common.sh@852 -- # return 0 00:04:23.929 17:29:45 -- json_config/json_config.sh@115 -- # echo '' 00:04:23.929 00:04:23.929 17:29:45 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:23.929 17:29:45 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:23.929 INFO: Checking if target configuration is the same... 00:04:23.929 17:29:45 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.929 17:29:45 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:23.929 17:29:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.929 + '[' 2 -ne 2 ']' 00:04:23.929 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:23.929 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:23.929 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.929 +++ basename /dev/fd/62 00:04:23.929 ++ mktemp /tmp/62.XXX 00:04:23.929 + tmp_file_1=/tmp/62.QQa 00:04:23.929 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.929 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:23.929 + tmp_file_2=/tmp/spdk_tgt_config.json.Sw2 00:04:23.929 + ret=0 00:04:23.929 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:24.188 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:24.188 + diff -u /tmp/62.QQa /tmp/spdk_tgt_config.json.Sw2 00:04:24.188 + echo 'INFO: JSON config files are the same' 00:04:24.188 INFO: JSON config files are the same 00:04:24.188 + rm /tmp/62.QQa /tmp/spdk_tgt_config.json.Sw2 00:04:24.188 + exit 0 00:04:24.188 17:29:45 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:24.188 17:29:45 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:24.188 INFO: changing configuration and checking if this can be detected... 
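The "Checking if target configuration is the same" step relaunches the target from spdk_tgt_config.json and then compares that file against a fresh save_config dump; json_diff.sh normalizes both sides with config_filter.py -method sort before diffing, so key ordering cannot cause a false mismatch. A rough equivalent of what the helper does, reconstructed from the trace above (again with $SPDK_DIR standing in for the workspace path):

# Compare the running configuration against the saved spdk_tgt_config.json.
# config_filter.py -method sort reads JSON on stdin and writes sorted JSON on stdout,
# as in the json_diff.sh invocations traced above.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
    | "$SPDK_DIR/test/json_config/config_filter.py" -method sort > /tmp/running_sorted.json
"$SPDK_DIR/test/json_config/config_filter.py" -method sort \
    < "$SPDK_DIR/spdk_tgt_config.json" > /tmp/saved_sorted.json
if diff -u /tmp/saved_sorted.json /tmp/running_sorted.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi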
00:04:24.188 17:29:45 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:24.188 17:29:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:24.447 17:29:45 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.447 17:29:45 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:24.447 17:29:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.447 + '[' 2 -ne 2 ']' 00:04:24.447 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:24.447 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:24.447 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:24.447 +++ basename /dev/fd/62 00:04:24.447 ++ mktemp /tmp/62.XXX 00:04:24.447 + tmp_file_1=/tmp/62.Kgf 00:04:24.447 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.447 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:24.447 + tmp_file_2=/tmp/spdk_tgt_config.json.T84 00:04:24.447 + ret=0 00:04:24.447 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:24.706 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:24.706 + diff -u /tmp/62.Kgf /tmp/spdk_tgt_config.json.T84 00:04:24.706 + ret=1 00:04:24.706 + echo '=== Start of file: /tmp/62.Kgf ===' 00:04:24.706 + cat /tmp/62.Kgf 00:04:24.706 + echo '=== End of file: /tmp/62.Kgf ===' 00:04:24.706 + echo '' 00:04:24.706 + echo '=== Start of file: /tmp/spdk_tgt_config.json.T84 ===' 00:04:24.706 + cat /tmp/spdk_tgt_config.json.T84 00:04:24.706 + echo '=== End of file: /tmp/spdk_tgt_config.json.T84 ===' 00:04:24.706 + echo '' 00:04:24.706 + rm /tmp/62.Kgf /tmp/spdk_tgt_config.json.T84 00:04:24.706 + exit 1 00:04:24.706 17:29:46 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:24.706 INFO: configuration change detected. 
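The follow-up check works in the opposite direction: deleting MallocBdevForConfigChangeCheck mutates the running configuration, so the same diff must now fail, and that non-zero exit is what the test reports as "configuration change detected". Sketched under the same $SPDK_DIR assumption:

# Negative check: remove the sentinel bdev, then the saved config no longer matches.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
if ! "$SPDK_DIR/test/json_config/json_diff.sh" \
        <("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config) \
        "$SPDK_DIR/spdk_tgt_config.json"; then
    echo 'INFO: configuration change detected.'    # diff exits 1, as seen in the trace
fi

The <(...) process substitution is what shows up as /dev/fd/62 in the traced json_diff.sh arguments.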
00:04:24.706 17:29:46 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:24.706 17:29:46 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:24.706 17:29:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:24.706 17:29:46 -- common/autotest_common.sh@10 -- # set +x 00:04:24.706 17:29:46 -- json_config/json_config.sh@360 -- # local ret=0 00:04:24.706 17:29:46 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:24.706 17:29:46 -- json_config/json_config.sh@370 -- # [[ -n 426523 ]] 00:04:24.706 17:29:46 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:24.706 17:29:46 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:24.706 17:29:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:24.706 17:29:46 -- common/autotest_common.sh@10 -- # set +x 00:04:24.706 17:29:46 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:24.706 17:29:46 -- json_config/json_config.sh@246 -- # uname -s 00:04:24.706 17:29:46 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:24.706 17:29:46 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:24.706 17:29:46 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:24.706 17:29:46 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:24.706 17:29:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:24.706 17:29:46 -- common/autotest_common.sh@10 -- # set +x 00:04:24.706 17:29:46 -- json_config/json_config.sh@376 -- # killprocess 426523 00:04:24.706 17:29:46 -- common/autotest_common.sh@926 -- # '[' -z 426523 ']' 00:04:24.706 17:29:46 -- common/autotest_common.sh@930 -- # kill -0 426523 00:04:24.706 17:29:46 -- common/autotest_common.sh@931 -- # uname 00:04:24.706 17:29:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:24.706 17:29:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 426523 00:04:24.706 17:29:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:24.706 17:29:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:24.706 17:29:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 426523' 00:04:24.706 killing process with pid 426523 00:04:24.706 17:29:46 -- common/autotest_common.sh@945 -- # kill 426523 00:04:24.706 17:29:46 -- common/autotest_common.sh@950 -- # wait 426523 00:04:26.613 17:29:47 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.613 17:29:47 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:26.613 17:29:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:26.613 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:04:26.613 17:29:47 -- json_config/json_config.sh@381 -- # return 0 00:04:26.613 17:29:47 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:26.613 INFO: Success 00:04:26.613 00:04:26.613 real 0m15.002s 00:04:26.613 user 0m15.873s 00:04:26.613 sys 0m2.108s 00:04:26.613 17:29:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.613 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:04:26.613 ************************************ 00:04:26.613 END TEST json_config 00:04:26.613 ************************************ 00:04:26.613 17:29:47 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:26.613 17:29:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.613 17:29:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.613 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:04:26.613 ************************************ 00:04:26.613 START TEST json_config_extra_key 00:04:26.613 ************************************ 00:04:26.613 17:29:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:26.613 17:29:47 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:26.613 17:29:47 -- nvmf/common.sh@7 -- # uname -s 00:04:26.613 17:29:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.613 17:29:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.613 17:29:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.613 17:29:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.613 17:29:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.613 17:29:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.613 17:29:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.613 17:29:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.613 17:29:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.613 17:29:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.613 17:29:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:26.613 17:29:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:26.613 17:29:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.613 17:29:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.613 17:29:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.613 17:29:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:26.613 17:29:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.614 17:29:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.614 17:29:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.614 17:29:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.614 17:29:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.614 17:29:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.614 17:29:47 -- paths/export.sh@5 -- # export PATH 00:04:26.614 17:29:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.614 17:29:47 -- nvmf/common.sh@46 -- # : 0 00:04:26.614 17:29:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:26.614 17:29:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:26.614 17:29:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:26.614 17:29:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.614 17:29:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.614 17:29:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:26.614 17:29:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:26.614 17:29:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:26.614 INFO: launching applications... 
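The extra_key test keeps its per-application state in bash associative arrays keyed by the app name, as the declarations traced above show: PID, RPC socket, spdk_tgt parameters, and the JSON config to load. A sketch of that bookkeeping pattern with the values copied from the trace (the lookup at the end is illustrative, not part of the test itself):

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')

app=target
# Each helper in the test addresses one app purely through these maps.
echo "launching $app with ${app_params[$app]} on ${app_socket[$app]} using ${configs_path[$app]}"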
00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=427819 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:26.614 Waiting for target to run... 00:04:26.614 17:29:47 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 427819 /var/tmp/spdk_tgt.sock 00:04:26.614 17:29:47 -- common/autotest_common.sh@819 -- # '[' -z 427819 ']' 00:04:26.614 17:29:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.614 17:29:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:26.614 17:29:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.614 17:29:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:26.614 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:04:26.614 [2024-07-24 17:29:47.982489] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:26.614 [2024-07-24 17:29:47.982540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427819 ] 00:04:26.614 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.874 [2024-07-24 17:29:48.248805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.874 [2024-07-24 17:29:48.315613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:26.874 [2024-07-24 17:29:48.315726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.443 17:29:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:27.443 17:29:48 -- common/autotest_common.sh@852 -- # return 0 00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:27.443 00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:27.443 INFO: shutting down applications... 
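json_config_test_start_app, as traced above, launches spdk_tgt with extra_key.json applied via --json, records the PID, and blocks in waitforlisten until the RPC socket answers. A rough recreation of that launch-and-wait step; the polling loop is an illustrative stand-in for the waitforlisten helper, while the command line and paths are the ones shown in the trace:

spdk_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
cfg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json

# Start the target with the JSON config applied at init time.
"$spdk_bin" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$cfg" &
app_pid=$!

# Poll the RPC socket until the app is up (waitforlisten does this with retries).
for _ in $(seq 1 30); do
    "$rpc_py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done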
00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 427819 ]] 00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 427819 00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@50 -- # kill -0 427819 00:04:27.443 17:29:48 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:27.703 17:29:49 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:27.703 17:29:49 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:27.703 17:29:49 -- json_config/json_config_extra_key.sh@50 -- # kill -0 427819 00:04:27.703 17:29:49 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:27.703 17:29:49 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:27.703 17:29:49 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:27.703 17:29:49 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:27.703 SPDK target shutdown done 00:04:27.703 17:29:49 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:27.703 Success 00:04:27.703 00:04:27.703 real 0m1.425s 00:04:27.703 user 0m1.252s 00:04:27.703 sys 0m0.346s 00:04:27.703 17:29:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.703 17:29:49 -- common/autotest_common.sh@10 -- # set +x 00:04:27.703 ************************************ 00:04:27.703 END TEST json_config_extra_key 00:04:27.703 ************************************ 00:04:27.964 17:29:49 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.964 17:29:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.964 17:29:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.964 17:29:49 -- common/autotest_common.sh@10 -- # set +x 00:04:27.964 ************************************ 00:04:27.964 START TEST alias_rpc 00:04:27.964 ************************************ 00:04:27.964 17:29:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.964 * Looking for test storage... 00:04:27.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:27.964 17:29:49 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:27.964 17:29:49 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=428098 00:04:27.964 17:29:49 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.964 17:29:49 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 428098 00:04:27.964 17:29:49 -- common/autotest_common.sh@819 -- # '[' -z 428098 ']' 00:04:27.964 17:29:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.964 17:29:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:27.964 17:29:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:27.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.964 17:29:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:27.964 17:29:49 -- common/autotest_common.sh@10 -- # set +x 00:04:27.964 [2024-07-24 17:29:49.455176] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:27.964 [2024-07-24 17:29:49.455233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428098 ] 00:04:27.964 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.964 [2024-07-24 17:29:49.508654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.224 [2024-07-24 17:29:49.588309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:28.224 [2024-07-24 17:29:49.588421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.791 17:29:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:28.791 17:29:50 -- common/autotest_common.sh@852 -- # return 0 00:04:28.791 17:29:50 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:29.051 17:29:50 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 428098 00:04:29.051 17:29:50 -- common/autotest_common.sh@926 -- # '[' -z 428098 ']' 00:04:29.051 17:29:50 -- common/autotest_common.sh@930 -- # kill -0 428098 00:04:29.051 17:29:50 -- common/autotest_common.sh@931 -- # uname 00:04:29.051 17:29:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:29.051 17:29:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 428098 00:04:29.051 17:29:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:29.051 17:29:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:29.051 17:29:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 428098' 00:04:29.051 killing process with pid 428098 00:04:29.051 17:29:50 -- common/autotest_common.sh@945 -- # kill 428098 00:04:29.051 17:29:50 -- common/autotest_common.sh@950 -- # wait 428098 00:04:29.311 00:04:29.311 real 0m1.455s 00:04:29.311 user 0m1.570s 00:04:29.311 sys 0m0.372s 00:04:29.311 17:29:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.311 17:29:50 -- common/autotest_common.sh@10 -- # set +x 00:04:29.311 ************************************ 00:04:29.311 END TEST alias_rpc 00:04:29.311 ************************************ 00:04:29.311 17:29:50 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:29.311 17:29:50 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:29.311 17:29:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.311 17:29:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.311 17:29:50 -- common/autotest_common.sh@10 -- # set +x 00:04:29.311 ************************************ 00:04:29.311 START TEST spdkcli_tcp 00:04:29.311 ************************************ 00:04:29.311 17:29:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:29.311 * Looking for test storage... 
00:04:29.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:29.311 17:29:50 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:29.311 17:29:50 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:29.311 17:29:50 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:29.311 17:29:50 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:29.311 17:29:50 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:29.311 17:29:50 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:29.311 17:29:50 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:29.311 17:29:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:29.311 17:29:50 -- common/autotest_common.sh@10 -- # set +x 00:04:29.311 17:29:50 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:29.311 17:29:50 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=428390 00:04:29.571 17:29:50 -- spdkcli/tcp.sh@27 -- # waitforlisten 428390 00:04:29.571 17:29:50 -- common/autotest_common.sh@819 -- # '[' -z 428390 ']' 00:04:29.571 17:29:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.571 17:29:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:29.571 17:29:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.571 17:29:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:29.571 17:29:50 -- common/autotest_common.sh@10 -- # set +x 00:04:29.571 [2024-07-24 17:29:50.937976] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:29.571 [2024-07-24 17:29:50.938027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428390 ] 00:04:29.571 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.571 [2024-07-24 17:29:50.992392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.571 [2024-07-24 17:29:51.071680] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:29.571 [2024-07-24 17:29:51.071866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.571 [2024-07-24 17:29:51.071869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.509 17:29:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:30.509 17:29:51 -- common/autotest_common.sh@852 -- # return 0 00:04:30.509 17:29:51 -- spdkcli/tcp.sh@31 -- # socat_pid=428498 00:04:30.509 17:29:51 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:30.509 17:29:51 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:30.509 [ 00:04:30.509 "bdev_malloc_delete", 00:04:30.509 "bdev_malloc_create", 00:04:30.509 "bdev_null_resize", 00:04:30.509 "bdev_null_delete", 00:04:30.509 "bdev_null_create", 00:04:30.509 "bdev_nvme_cuse_unregister", 00:04:30.509 "bdev_nvme_cuse_register", 00:04:30.509 "bdev_opal_new_user", 00:04:30.509 "bdev_opal_set_lock_state", 00:04:30.509 "bdev_opal_delete", 00:04:30.509 "bdev_opal_get_info", 00:04:30.509 "bdev_opal_create", 00:04:30.509 "bdev_nvme_opal_revert", 00:04:30.509 "bdev_nvme_opal_init", 00:04:30.509 "bdev_nvme_send_cmd", 00:04:30.509 "bdev_nvme_get_path_iostat", 00:04:30.509 "bdev_nvme_get_mdns_discovery_info", 00:04:30.509 "bdev_nvme_stop_mdns_discovery", 00:04:30.509 "bdev_nvme_start_mdns_discovery", 00:04:30.509 "bdev_nvme_set_multipath_policy", 00:04:30.509 "bdev_nvme_set_preferred_path", 00:04:30.509 "bdev_nvme_get_io_paths", 00:04:30.509 "bdev_nvme_remove_error_injection", 00:04:30.509 "bdev_nvme_add_error_injection", 00:04:30.509 "bdev_nvme_get_discovery_info", 00:04:30.509 "bdev_nvme_stop_discovery", 00:04:30.509 "bdev_nvme_start_discovery", 00:04:30.509 "bdev_nvme_get_controller_health_info", 00:04:30.509 "bdev_nvme_disable_controller", 00:04:30.509 "bdev_nvme_enable_controller", 00:04:30.509 "bdev_nvme_reset_controller", 00:04:30.509 "bdev_nvme_get_transport_statistics", 00:04:30.509 "bdev_nvme_apply_firmware", 00:04:30.509 "bdev_nvme_detach_controller", 00:04:30.509 "bdev_nvme_get_controllers", 00:04:30.509 "bdev_nvme_attach_controller", 00:04:30.509 "bdev_nvme_set_hotplug", 00:04:30.509 "bdev_nvme_set_options", 00:04:30.509 "bdev_passthru_delete", 00:04:30.509 "bdev_passthru_create", 00:04:30.509 "bdev_lvol_grow_lvstore", 00:04:30.509 "bdev_lvol_get_lvols", 00:04:30.509 "bdev_lvol_get_lvstores", 00:04:30.509 "bdev_lvol_delete", 00:04:30.509 "bdev_lvol_set_read_only", 00:04:30.509 "bdev_lvol_resize", 00:04:30.509 "bdev_lvol_decouple_parent", 00:04:30.509 "bdev_lvol_inflate", 00:04:30.509 "bdev_lvol_rename", 00:04:30.509 "bdev_lvol_clone_bdev", 00:04:30.509 "bdev_lvol_clone", 00:04:30.509 "bdev_lvol_snapshot", 00:04:30.509 "bdev_lvol_create", 00:04:30.509 "bdev_lvol_delete_lvstore", 00:04:30.509 "bdev_lvol_rename_lvstore", 00:04:30.509 "bdev_lvol_create_lvstore", 00:04:30.509 "bdev_raid_set_options", 00:04:30.509 
"bdev_raid_remove_base_bdev", 00:04:30.509 "bdev_raid_add_base_bdev", 00:04:30.509 "bdev_raid_delete", 00:04:30.509 "bdev_raid_create", 00:04:30.509 "bdev_raid_get_bdevs", 00:04:30.509 "bdev_error_inject_error", 00:04:30.509 "bdev_error_delete", 00:04:30.509 "bdev_error_create", 00:04:30.509 "bdev_split_delete", 00:04:30.509 "bdev_split_create", 00:04:30.509 "bdev_delay_delete", 00:04:30.509 "bdev_delay_create", 00:04:30.509 "bdev_delay_update_latency", 00:04:30.509 "bdev_zone_block_delete", 00:04:30.509 "bdev_zone_block_create", 00:04:30.509 "blobfs_create", 00:04:30.509 "blobfs_detect", 00:04:30.509 "blobfs_set_cache_size", 00:04:30.509 "bdev_aio_delete", 00:04:30.509 "bdev_aio_rescan", 00:04:30.509 "bdev_aio_create", 00:04:30.509 "bdev_ftl_set_property", 00:04:30.509 "bdev_ftl_get_properties", 00:04:30.509 "bdev_ftl_get_stats", 00:04:30.509 "bdev_ftl_unmap", 00:04:30.509 "bdev_ftl_unload", 00:04:30.509 "bdev_ftl_delete", 00:04:30.509 "bdev_ftl_load", 00:04:30.509 "bdev_ftl_create", 00:04:30.509 "bdev_virtio_attach_controller", 00:04:30.509 "bdev_virtio_scsi_get_devices", 00:04:30.509 "bdev_virtio_detach_controller", 00:04:30.509 "bdev_virtio_blk_set_hotplug", 00:04:30.509 "bdev_iscsi_delete", 00:04:30.509 "bdev_iscsi_create", 00:04:30.509 "bdev_iscsi_set_options", 00:04:30.509 "accel_error_inject_error", 00:04:30.509 "ioat_scan_accel_module", 00:04:30.509 "dsa_scan_accel_module", 00:04:30.509 "iaa_scan_accel_module", 00:04:30.509 "iscsi_set_options", 00:04:30.509 "iscsi_get_auth_groups", 00:04:30.509 "iscsi_auth_group_remove_secret", 00:04:30.509 "iscsi_auth_group_add_secret", 00:04:30.509 "iscsi_delete_auth_group", 00:04:30.509 "iscsi_create_auth_group", 00:04:30.509 "iscsi_set_discovery_auth", 00:04:30.509 "iscsi_get_options", 00:04:30.509 "iscsi_target_node_request_logout", 00:04:30.509 "iscsi_target_node_set_redirect", 00:04:30.509 "iscsi_target_node_set_auth", 00:04:30.509 "iscsi_target_node_add_lun", 00:04:30.509 "iscsi_get_connections", 00:04:30.509 "iscsi_portal_group_set_auth", 00:04:30.509 "iscsi_start_portal_group", 00:04:30.509 "iscsi_delete_portal_group", 00:04:30.509 "iscsi_create_portal_group", 00:04:30.509 "iscsi_get_portal_groups", 00:04:30.509 "iscsi_delete_target_node", 00:04:30.509 "iscsi_target_node_remove_pg_ig_maps", 00:04:30.509 "iscsi_target_node_add_pg_ig_maps", 00:04:30.509 "iscsi_create_target_node", 00:04:30.509 "iscsi_get_target_nodes", 00:04:30.509 "iscsi_delete_initiator_group", 00:04:30.509 "iscsi_initiator_group_remove_initiators", 00:04:30.509 "iscsi_initiator_group_add_initiators", 00:04:30.509 "iscsi_create_initiator_group", 00:04:30.509 "iscsi_get_initiator_groups", 00:04:30.509 "nvmf_set_crdt", 00:04:30.509 "nvmf_set_config", 00:04:30.509 "nvmf_set_max_subsystems", 00:04:30.509 "nvmf_subsystem_get_listeners", 00:04:30.509 "nvmf_subsystem_get_qpairs", 00:04:30.509 "nvmf_subsystem_get_controllers", 00:04:30.509 "nvmf_get_stats", 00:04:30.509 "nvmf_get_transports", 00:04:30.509 "nvmf_create_transport", 00:04:30.509 "nvmf_get_targets", 00:04:30.509 "nvmf_delete_target", 00:04:30.509 "nvmf_create_target", 00:04:30.509 "nvmf_subsystem_allow_any_host", 00:04:30.509 "nvmf_subsystem_remove_host", 00:04:30.509 "nvmf_subsystem_add_host", 00:04:30.509 "nvmf_subsystem_remove_ns", 00:04:30.509 "nvmf_subsystem_add_ns", 00:04:30.509 "nvmf_subsystem_listener_set_ana_state", 00:04:30.510 "nvmf_discovery_get_referrals", 00:04:30.510 "nvmf_discovery_remove_referral", 00:04:30.510 "nvmf_discovery_add_referral", 00:04:30.510 "nvmf_subsystem_remove_listener", 
00:04:30.510 "nvmf_subsystem_add_listener", 00:04:30.510 "nvmf_delete_subsystem", 00:04:30.510 "nvmf_create_subsystem", 00:04:30.510 "nvmf_get_subsystems", 00:04:30.510 "env_dpdk_get_mem_stats", 00:04:30.510 "nbd_get_disks", 00:04:30.510 "nbd_stop_disk", 00:04:30.510 "nbd_start_disk", 00:04:30.510 "ublk_recover_disk", 00:04:30.510 "ublk_get_disks", 00:04:30.510 "ublk_stop_disk", 00:04:30.510 "ublk_start_disk", 00:04:30.510 "ublk_destroy_target", 00:04:30.510 "ublk_create_target", 00:04:30.510 "virtio_blk_create_transport", 00:04:30.510 "virtio_blk_get_transports", 00:04:30.510 "vhost_controller_set_coalescing", 00:04:30.510 "vhost_get_controllers", 00:04:30.510 "vhost_delete_controller", 00:04:30.510 "vhost_create_blk_controller", 00:04:30.510 "vhost_scsi_controller_remove_target", 00:04:30.510 "vhost_scsi_controller_add_target", 00:04:30.510 "vhost_start_scsi_controller", 00:04:30.510 "vhost_create_scsi_controller", 00:04:30.510 "thread_set_cpumask", 00:04:30.510 "framework_get_scheduler", 00:04:30.510 "framework_set_scheduler", 00:04:30.510 "framework_get_reactors", 00:04:30.510 "thread_get_io_channels", 00:04:30.510 "thread_get_pollers", 00:04:30.510 "thread_get_stats", 00:04:30.510 "framework_monitor_context_switch", 00:04:30.510 "spdk_kill_instance", 00:04:30.510 "log_enable_timestamps", 00:04:30.510 "log_get_flags", 00:04:30.510 "log_clear_flag", 00:04:30.510 "log_set_flag", 00:04:30.510 "log_get_level", 00:04:30.510 "log_set_level", 00:04:30.510 "log_get_print_level", 00:04:30.510 "log_set_print_level", 00:04:30.510 "framework_enable_cpumask_locks", 00:04:30.510 "framework_disable_cpumask_locks", 00:04:30.510 "framework_wait_init", 00:04:30.510 "framework_start_init", 00:04:30.510 "scsi_get_devices", 00:04:30.510 "bdev_get_histogram", 00:04:30.510 "bdev_enable_histogram", 00:04:30.510 "bdev_set_qos_limit", 00:04:30.510 "bdev_set_qd_sampling_period", 00:04:30.510 "bdev_get_bdevs", 00:04:30.510 "bdev_reset_iostat", 00:04:30.510 "bdev_get_iostat", 00:04:30.510 "bdev_examine", 00:04:30.510 "bdev_wait_for_examine", 00:04:30.510 "bdev_set_options", 00:04:30.510 "notify_get_notifications", 00:04:30.510 "notify_get_types", 00:04:30.510 "accel_get_stats", 00:04:30.510 "accel_set_options", 00:04:30.510 "accel_set_driver", 00:04:30.510 "accel_crypto_key_destroy", 00:04:30.510 "accel_crypto_keys_get", 00:04:30.510 "accel_crypto_key_create", 00:04:30.510 "accel_assign_opc", 00:04:30.510 "accel_get_module_info", 00:04:30.510 "accel_get_opc_assignments", 00:04:30.510 "vmd_rescan", 00:04:30.510 "vmd_remove_device", 00:04:30.510 "vmd_enable", 00:04:30.510 "sock_set_default_impl", 00:04:30.510 "sock_impl_set_options", 00:04:30.510 "sock_impl_get_options", 00:04:30.510 "iobuf_get_stats", 00:04:30.510 "iobuf_set_options", 00:04:30.510 "framework_get_pci_devices", 00:04:30.510 "framework_get_config", 00:04:30.510 "framework_get_subsystems", 00:04:30.510 "trace_get_info", 00:04:30.510 "trace_get_tpoint_group_mask", 00:04:30.510 "trace_disable_tpoint_group", 00:04:30.510 "trace_enable_tpoint_group", 00:04:30.510 "trace_clear_tpoint_mask", 00:04:30.510 "trace_set_tpoint_mask", 00:04:30.510 "spdk_get_version", 00:04:30.510 "rpc_get_methods" 00:04:30.510 ] 00:04:30.510 17:29:51 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:30.510 17:29:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:30.510 17:29:51 -- common/autotest_common.sh@10 -- # set +x 00:04:30.510 17:29:51 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:30.510 17:29:51 -- spdkcli/tcp.sh@38 -- # killprocess 
428390 00:04:30.510 17:29:51 -- common/autotest_common.sh@926 -- # '[' -z 428390 ']' 00:04:30.510 17:29:51 -- common/autotest_common.sh@930 -- # kill -0 428390 00:04:30.510 17:29:51 -- common/autotest_common.sh@931 -- # uname 00:04:30.510 17:29:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:30.510 17:29:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 428390 00:04:30.510 17:29:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:30.510 17:29:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:30.510 17:29:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 428390' 00:04:30.510 killing process with pid 428390 00:04:30.510 17:29:52 -- common/autotest_common.sh@945 -- # kill 428390 00:04:30.510 17:29:52 -- common/autotest_common.sh@950 -- # wait 428390 00:04:30.770 00:04:30.770 real 0m1.522s 00:04:30.770 user 0m2.896s 00:04:30.770 sys 0m0.389s 00:04:30.770 17:29:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.770 17:29:52 -- common/autotest_common.sh@10 -- # set +x 00:04:30.770 ************************************ 00:04:30.770 END TEST spdkcli_tcp 00:04:30.770 ************************************ 00:04:31.029 17:29:52 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:31.029 17:29:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.029 17:29:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.029 17:29:52 -- common/autotest_common.sh@10 -- # set +x 00:04:31.029 ************************************ 00:04:31.029 START TEST dpdk_mem_utility 00:04:31.029 ************************************ 00:04:31.029 17:29:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:31.029 * Looking for test storage... 00:04:31.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:31.029 17:29:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:31.029 17:29:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:31.029 17:29:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=428693 00:04:31.029 17:29:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 428693 00:04:31.029 17:29:52 -- common/autotest_common.sh@819 -- # '[' -z 428693 ']' 00:04:31.029 17:29:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.029 17:29:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:31.029 17:29:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.029 17:29:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:31.029 17:29:52 -- common/autotest_common.sh@10 -- # set +x 00:04:31.029 [2024-07-24 17:29:52.502959] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:31.029 [2024-07-24 17:29:52.503013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428693 ] 00:04:31.029 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.029 [2024-07-24 17:29:52.556347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.288 [2024-07-24 17:29:52.633293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:31.288 [2024-07-24 17:29:52.633424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.864 17:29:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:31.864 17:29:53 -- common/autotest_common.sh@852 -- # return 0 00:04:31.864 17:29:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:31.864 17:29:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:31.864 17:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.864 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:04:31.864 { 00:04:31.864 "filename": "/tmp/spdk_mem_dump.txt" 00:04:31.864 } 00:04:31.864 17:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.864 17:29:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:31.864 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:31.864 1 heaps totaling size 814.000000 MiB 00:04:31.864 size: 814.000000 MiB heap id: 0 00:04:31.864 end heaps---------- 00:04:31.864 8 mempools totaling size 598.116089 MiB 00:04:31.864 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:31.864 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:31.864 size: 84.521057 MiB name: bdev_io_428693 00:04:31.864 size: 51.011292 MiB name: evtpool_428693 00:04:31.864 size: 50.003479 MiB name: msgpool_428693 00:04:31.864 size: 21.763794 MiB name: PDU_Pool 00:04:31.864 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:31.864 size: 0.026123 MiB name: Session_Pool 00:04:31.864 end mempools------- 00:04:31.864 6 memzones totaling size 4.142822 MiB 00:04:31.864 size: 1.000366 MiB name: RG_ring_0_428693 00:04:31.864 size: 1.000366 MiB name: RG_ring_1_428693 00:04:31.864 size: 1.000366 MiB name: RG_ring_4_428693 00:04:31.864 size: 1.000366 MiB name: RG_ring_5_428693 00:04:31.864 size: 0.125366 MiB name: RG_ring_2_428693 00:04:31.864 size: 0.015991 MiB name: RG_ring_3_428693 00:04:31.864 end memzones------- 00:04:31.864 17:29:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:31.864 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:31.864 list of free elements. 
size: 12.519348 MiB 00:04:31.864 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:31.864 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:31.864 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:31.864 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:31.864 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:31.864 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:31.864 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:31.864 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:31.864 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:31.864 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:31.864 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:31.864 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:31.864 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:31.864 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:31.864 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:31.864 list of standard malloc elements. size: 199.218079 MiB 00:04:31.864 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:31.864 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:31.864 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:31.864 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:31.864 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:31.864 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:31.864 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:31.864 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:31.864 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:31.864 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:31.864 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:31.864 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:31.864 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:31.864 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:31.864 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:31.864 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:31.864 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:31.864 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:31.864 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:31.864 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:31.864 list of memzone associated elements. size: 602.262573 MiB 00:04:31.864 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:31.864 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:31.864 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:31.864 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:31.864 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:31.864 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_428693_0 00:04:31.864 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:31.864 associated memzone info: size: 48.002930 MiB name: MP_evtpool_428693_0 00:04:31.864 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:31.864 associated memzone info: size: 48.002930 MiB name: MP_msgpool_428693_0 00:04:31.864 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:31.864 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:31.864 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:31.864 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:31.864 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:31.864 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_428693 00:04:31.864 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:31.864 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_428693 00:04:31.864 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:31.864 associated memzone info: size: 1.007996 MiB name: MP_evtpool_428693 00:04:31.864 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:31.864 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:31.864 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:31.864 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:31.864 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:31.864 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:31.864 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:31.864 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:31.864 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:31.864 associated memzone info: size: 1.000366 MiB name: RG_ring_0_428693 00:04:31.864 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:31.864 associated memzone info: size: 1.000366 MiB name: RG_ring_1_428693 00:04:31.864 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:31.864 associated memzone info: size: 1.000366 MiB name: RG_ring_4_428693 00:04:31.864 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:31.865 associated memzone info: size: 1.000366 MiB name: RG_ring_5_428693 00:04:31.865 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:31.865 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_428693 00:04:31.865 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:31.865 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:31.865 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:31.865 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:31.865 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:31.865 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:31.865 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:31.865 associated memzone info: size: 0.125366 MiB name: RG_ring_2_428693 00:04:31.865 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:31.865 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:31.865 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:31.865 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:31.865 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:31.865 associated memzone info: size: 0.015991 MiB name: RG_ring_3_428693 00:04:31.865 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:31.865 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:31.865 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:31.865 associated memzone info: size: 0.000183 MiB name: MP_msgpool_428693 00:04:31.865 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:31.865 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_428693 00:04:31.865 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:31.865 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:31.865 17:29:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:31.865 17:29:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 428693 00:04:31.865 17:29:53 -- common/autotest_common.sh@926 -- # '[' -z 428693 ']' 00:04:31.865 17:29:53 -- common/autotest_common.sh@930 -- # kill -0 428693 00:04:31.865 17:29:53 -- common/autotest_common.sh@931 -- # uname 00:04:31.865 17:29:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:31.865 17:29:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 428693 00:04:31.865 17:29:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:31.865 17:29:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:31.865 17:29:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 428693' 00:04:31.865 killing process with pid 428693 00:04:31.865 17:29:53 -- common/autotest_common.sh@945 -- # kill 428693 00:04:31.865 17:29:53 -- common/autotest_common.sh@950 -- # wait 428693 00:04:32.441 00:04:32.441 real 0m1.400s 00:04:32.441 user 0m1.495s 00:04:32.441 sys 0m0.364s 00:04:32.441 17:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.441 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:04:32.441 ************************************ 00:04:32.441 END TEST dpdk_mem_utility 00:04:32.441 ************************************ 00:04:32.441 17:29:53 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:32.441 17:29:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.441 17:29:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.441 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:04:32.441 
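The dpdk_mem_utility test above drives everything through scripts/dpdk_mem_info.py: the env_dpdk_get_mem_stats RPC makes the target write its memory dump (the reply in the trace names /tmp/spdk_mem_dump.txt), the plain invocation prints the heap/mempool/memzone summary, and the -m 0 form prints the detailed element listing reproduced above. A sketch of that sequence, assuming rpc.py talks to the default /var/tmp/spdk.sock used by this test:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Ask the running target to write its DPDK memory dump.
"$spdk"/scripts/rpc.py env_dpdk_get_mem_stats

# Summary view: heaps, mempools and memzones, as printed above.
"$spdk"/scripts/dpdk_mem_info.py

# Detailed per-element view, the -m 0 form shown in the trace.
"$spdk"/scripts/dpdk_mem_info.py -m 0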
************************************ 00:04:32.441 START TEST event 00:04:32.441 ************************************ 00:04:32.441 17:29:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:32.441 * Looking for test storage... 00:04:32.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:32.441 17:29:53 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:32.441 17:29:53 -- bdev/nbd_common.sh@6 -- # set -e 00:04:32.441 17:29:53 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:32.441 17:29:53 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:32.441 17:29:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.441 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:04:32.441 ************************************ 00:04:32.441 START TEST event_perf 00:04:32.441 ************************************ 00:04:32.441 17:29:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:32.441 Running I/O for 1 seconds...[2024-07-24 17:29:53.929548] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:32.441 [2024-07-24 17:29:53.929607] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428982 ] 00:04:32.441 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.441 [2024-07-24 17:29:53.977143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:32.700 [2024-07-24 17:29:54.049810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.700 [2024-07-24 17:29:54.049907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.700 [2024-07-24 17:29:54.049993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:32.700 [2024-07-24 17:29:54.049995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.638 Running I/O for 1 seconds... 00:04:33.638 lcore 0: 201766 00:04:33.638 lcore 1: 201765 00:04:33.638 lcore 2: 201764 00:04:33.638 lcore 3: 201765 00:04:33.638 done. 
00:04:33.638 00:04:33.638 real 0m1.220s 00:04:33.638 user 0m4.153s 00:04:33.638 sys 0m0.064s 00:04:33.638 17:29:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.638 17:29:55 -- common/autotest_common.sh@10 -- # set +x 00:04:33.638 ************************************ 00:04:33.638 END TEST event_perf 00:04:33.638 ************************************ 00:04:33.638 17:29:55 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:33.638 17:29:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:33.638 17:29:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.638 17:29:55 -- common/autotest_common.sh@10 -- # set +x 00:04:33.638 ************************************ 00:04:33.638 START TEST event_reactor 00:04:33.638 ************************************ 00:04:33.638 17:29:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:33.638 [2024-07-24 17:29:55.195907] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:33.638 [2024-07-24 17:29:55.195986] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429233 ] 00:04:33.638 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.897 [2024-07-24 17:29:55.254338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.897 [2024-07-24 17:29:55.320443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.836 test_start 00:04:34.836 oneshot 00:04:34.836 tick 100 00:04:34.836 tick 100 00:04:34.836 tick 250 00:04:34.836 tick 100 00:04:34.836 tick 100 00:04:34.836 tick 250 00:04:34.836 tick 500 00:04:34.836 tick 100 00:04:34.836 tick 100 00:04:34.836 tick 100 00:04:34.836 tick 250 00:04:34.836 tick 100 00:04:34.836 tick 100 00:04:34.836 test_end 00:04:34.836 00:04:34.836 real 0m1.230s 00:04:34.836 user 0m1.156s 00:04:34.836 sys 0m0.069s 00:04:34.836 17:29:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.836 17:29:56 -- common/autotest_common.sh@10 -- # set +x 00:04:34.836 ************************************ 00:04:34.836 END TEST event_reactor 00:04:34.836 ************************************ 00:04:35.095 17:29:56 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.095 17:29:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:35.095 17:29:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.095 17:29:56 -- common/autotest_common.sh@10 -- # set +x 00:04:35.095 ************************************ 00:04:35.095 START TEST event_reactor_perf 00:04:35.095 ************************************ 00:04:35.095 17:29:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.095 [2024-07-24 17:29:56.451845] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:35.095 [2024-07-24 17:29:56.451904] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429483 ] 00:04:35.095 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.095 [2024-07-24 17:29:56.504145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.095 [2024-07-24 17:29:56.574165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.473 test_start 00:04:36.473 test_end 00:04:36.473 Performance: 503849 events per second 00:04:36.473 00:04:36.473 real 0m1.219s 00:04:36.473 user 0m1.148s 00:04:36.473 sys 0m0.067s 00:04:36.473 17:29:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.473 17:29:57 -- common/autotest_common.sh@10 -- # set +x 00:04:36.473 ************************************ 00:04:36.473 END TEST event_reactor_perf 00:04:36.473 ************************************ 00:04:36.473 17:29:57 -- event/event.sh@49 -- # uname -s 00:04:36.473 17:29:57 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:36.473 17:29:57 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:36.473 17:29:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.473 17:29:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.473 17:29:57 -- common/autotest_common.sh@10 -- # set +x 00:04:36.473 ************************************ 00:04:36.473 START TEST event_scheduler 00:04:36.473 ************************************ 00:04:36.473 17:29:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:36.473 * Looking for test storage... 00:04:36.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:36.473 17:29:57 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:36.473 17:29:57 -- scheduler/scheduler.sh@35 -- # scheduler_pid=429765 00:04:36.473 17:29:57 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:36.473 17:29:57 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.473 17:29:57 -- scheduler/scheduler.sh@37 -- # waitforlisten 429765 00:04:36.473 17:29:57 -- common/autotest_common.sh@819 -- # '[' -z 429765 ']' 00:04:36.473 17:29:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.473 17:29:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:36.473 17:29:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.473 17:29:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:36.473 17:29:57 -- common/autotest_common.sh@10 -- # set +x 00:04:36.473 [2024-07-24 17:29:57.809066] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:36.473 [2024-07-24 17:29:57.809116] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429765 ] 00:04:36.473 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.473 [2024-07-24 17:29:57.862865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:36.473 [2024-07-24 17:29:57.935735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.473 [2024-07-24 17:29:57.935822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.473 [2024-07-24 17:29:57.935909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.473 [2024-07-24 17:29:57.935910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:37.043 17:29:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:37.043 17:29:58 -- common/autotest_common.sh@852 -- # return 0 00:04:37.043 17:29:58 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:37.043 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.043 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.043 POWER: Env isn't set yet! 00:04:37.043 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:37.043 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.043 POWER: Cannot set governor of lcore 0 to userspace 00:04:37.043 POWER: Attempting to initialise PSTAT power management... 00:04:37.363 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:37.363 POWER: Initialized successfully for lcore 0 power management 00:04:37.363 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:37.363 POWER: Initialized successfully for lcore 1 power management 00:04:37.363 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:37.363 POWER: Initialized successfully for lcore 2 power management 00:04:37.363 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:37.363 POWER: Initialized successfully for lcore 3 power management 00:04:37.363 [2024-07-24 17:29:58.674193] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:37.363 [2024-07-24 17:29:58.674207] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:37.363 [2024-07-24 17:29:58.674215] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 [2024-07-24 17:29:58.745020] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
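Before any threads are created, the scheduler test switches the framework to the dynamic scheduler and only then runs framework_start_init, which is when the POWER governor messages above are emitted and the dynamic scheduler announces its load/core/busy limits. Both calls are ordinary framework RPCs (they appear in the rpc_get_methods listing earlier in this log); a sketch of the same two steps issued through rpc.py, with the socket path assumed to be the test app's default:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

# Select the dynamic scheduler while the app is still parked in --wait-for-rpc.
"$rpc_py" -s "$sock" framework_set_scheduler dynamic

# Complete framework initialization; scheduling (and the governor setup traced
# above) starts from this point.
"$rpc_py" -s "$sock" framework_start_init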
00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:37.363 17:29:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.363 17:29:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 ************************************ 00:04:37.363 START TEST scheduler_create_thread 00:04:37.363 ************************************ 00:04:37.363 17:29:58 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 2 00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 3 00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 4 00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 5 00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 6 00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 7 00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 8 00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 9 00:04:37.363 
17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 10 00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:37.363 17:29:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:37.363 17:29:58 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:37.363 17:29:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.363 17:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:38.300 17:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.300 17:29:59 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:38.300 17:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:38.300 17:29:59 -- common/autotest_common.sh@10 -- # set +x 00:04:39.680 17:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:39.680 17:30:01 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:39.680 17:30:01 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:39.680 17:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:39.680 17:30:01 -- common/autotest_common.sh@10 -- # set +x 00:04:40.616 17:30:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:40.616 00:04:40.616 real 0m3.376s 00:04:40.616 user 0m0.022s 00:04:40.616 sys 0m0.007s 00:04:40.616 17:30:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.616 17:30:02 -- common/autotest_common.sh@10 -- # set +x 00:04:40.616 ************************************ 00:04:40.616 END TEST scheduler_create_thread 00:04:40.616 ************************************ 00:04:40.616 17:30:02 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:40.616 17:30:02 -- scheduler/scheduler.sh@46 -- # killprocess 429765 00:04:40.616 17:30:02 -- common/autotest_common.sh@926 -- # '[' -z 429765 ']' 00:04:40.616 17:30:02 -- common/autotest_common.sh@930 -- # kill -0 429765 00:04:40.616 17:30:02 -- common/autotest_common.sh@931 -- # uname 00:04:40.616 17:30:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:40.616 17:30:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 429765 00:04:40.616 17:30:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:40.616 17:30:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:40.616 17:30:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 429765' 00:04:40.616 killing process with pid 429765 00:04:40.616 17:30:02 -- common/autotest_common.sh@945 -- # kill 429765 00:04:40.616 17:30:02 -- common/autotest_common.sh@950 -- # wait 429765 00:04:41.184 [2024-07-24 17:30:02.509002] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
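The numbered threads above (2 through 10, then ids 11 and 12) are created through the test's RPC plugin. A sketch of the calls the trace is making, assuming scheduler_plugin is importable by rpc.py (the rpc_cmd wrapper arranges that in the test) and reusing the thread ids the trace happened to get back:

  # pinned threads: -n name, -m cpumask, -a how active the thread reports itself (percent)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # unpinned threads; the create call prints the new thread id, which the test captures
  thread_id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50   # id 11 in the trace
  # a throwaway thread is created and deleted again (id 12 in the trace)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12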
00:04:41.184 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:41.184 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:41.184 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:41.184 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:41.184 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:41.184 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:41.184 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:41.184 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:41.184 00:04:41.184 real 0m5.069s 00:04:41.184 user 0m10.510s 00:04:41.184 sys 0m0.335s 00:04:41.184 17:30:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.184 17:30:02 -- common/autotest_common.sh@10 -- # set +x 00:04:41.184 ************************************ 00:04:41.184 END TEST event_scheduler 00:04:41.184 ************************************ 00:04:41.443 17:30:02 -- event/event.sh@51 -- # modprobe -n nbd 00:04:41.443 17:30:02 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:41.443 17:30:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.443 17:30:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.443 17:30:02 -- common/autotest_common.sh@10 -- # set +x 00:04:41.443 ************************************ 00:04:41.443 START TEST app_repeat 00:04:41.443 ************************************ 00:04:41.443 17:30:02 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:41.443 17:30:02 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.443 17:30:02 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.443 17:30:02 -- event/event.sh@13 -- # local nbd_list 00:04:41.443 17:30:02 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.443 17:30:02 -- event/event.sh@14 -- # local bdev_list 00:04:41.443 17:30:02 -- event/event.sh@15 -- # local repeat_times=4 00:04:41.443 17:30:02 -- event/event.sh@17 -- # modprobe nbd 00:04:41.443 17:30:02 -- event/event.sh@19 -- # repeat_pid=430772 00:04:41.443 17:30:02 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.443 17:30:02 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 430772' 00:04:41.443 Process app_repeat pid: 430772 00:04:41.443 17:30:02 -- event/event.sh@23 -- # for i in {0..2} 00:04:41.443 17:30:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:41.443 spdk_app_start Round 0 00:04:41.443 17:30:02 -- event/event.sh@25 -- # waitforlisten 430772 /var/tmp/spdk-nbd.sock 00:04:41.443 17:30:02 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:41.443 17:30:02 -- common/autotest_common.sh@819 -- # '[' -z 430772 ']' 00:04:41.444 17:30:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.444 17:30:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:41.444 17:30:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
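With the scheduler test done and its cores handed back to the powersave governor, each app_repeat round that follows repeats the same wiring: start the app on its own RPC socket, wait for that socket, create two malloc bdevs and export them over NBD. Condensed from the trace, with paths relative to the spdk checkout:

  modprobe nbd
  ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock      # helper from autotest_common.sh
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1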
00:04:41.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:41.444 17:30:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:41.444 17:30:02 -- common/autotest_common.sh@10 -- # set +x 00:04:41.444 [2024-07-24 17:30:02.836375] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:41.444 [2024-07-24 17:30:02.836432] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430772 ] 00:04:41.444 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.444 [2024-07-24 17:30:02.891087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.444 [2024-07-24 17:30:02.969124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.444 [2024-07-24 17:30:02.969127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.381 17:30:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:42.381 17:30:03 -- common/autotest_common.sh@852 -- # return 0 00:04:42.381 17:30:03 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.381 Malloc0 00:04:42.381 17:30:03 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.641 Malloc1 00:04:42.641 17:30:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@12 -- # local i 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:42.641 /dev/nbd0 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.641 17:30:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:42.641 17:30:04 -- common/autotest_common.sh@857 -- # local i 00:04:42.641 17:30:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:42.641 17:30:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:42.641 17:30:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:42.641 17:30:04 -- 
common/autotest_common.sh@861 -- # break 00:04:42.641 17:30:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:42.641 17:30:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:42.641 17:30:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.641 1+0 records in 00:04:42.641 1+0 records out 00:04:42.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177439 s, 23.1 MB/s 00:04:42.641 17:30:04 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.641 17:30:04 -- common/autotest_common.sh@874 -- # size=4096 00:04:42.641 17:30:04 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.641 17:30:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:42.641 17:30:04 -- common/autotest_common.sh@877 -- # return 0 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.641 17:30:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:42.901 /dev/nbd1 00:04:42.901 17:30:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:42.901 17:30:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:42.901 17:30:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:42.901 17:30:04 -- common/autotest_common.sh@857 -- # local i 00:04:42.901 17:30:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:42.901 17:30:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:42.901 17:30:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:42.901 17:30:04 -- common/autotest_common.sh@861 -- # break 00:04:42.901 17:30:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:42.901 17:30:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:42.901 17:30:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.901 1+0 records in 00:04:42.901 1+0 records out 00:04:42.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237951 s, 17.2 MB/s 00:04:42.901 17:30:04 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.901 17:30:04 -- common/autotest_common.sh@874 -- # size=4096 00:04:42.901 17:30:04 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.901 17:30:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:42.901 17:30:04 -- common/autotest_common.sh@877 -- # return 0 00:04:42.901 17:30:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.901 17:30:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.901 17:30:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.901 17:30:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.901 17:30:04 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:43.161 { 00:04:43.161 "nbd_device": "/dev/nbd0", 00:04:43.161 "bdev_name": "Malloc0" 00:04:43.161 }, 00:04:43.161 { 00:04:43.161 "nbd_device": "/dev/nbd1", 
00:04:43.161 "bdev_name": "Malloc1" 00:04:43.161 } 00:04:43.161 ]' 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:43.161 { 00:04:43.161 "nbd_device": "/dev/nbd0", 00:04:43.161 "bdev_name": "Malloc0" 00:04:43.161 }, 00:04:43.161 { 00:04:43.161 "nbd_device": "/dev/nbd1", 00:04:43.161 "bdev_name": "Malloc1" 00:04:43.161 } 00:04:43.161 ]' 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:43.161 /dev/nbd1' 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:43.161 /dev/nbd1' 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@65 -- # count=2 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@95 -- # count=2 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:43.161 256+0 records in 00:04:43.161 256+0 records out 00:04:43.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428761 s, 245 MB/s 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:43.161 256+0 records in 00:04:43.161 256+0 records out 00:04:43.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134672 s, 77.9 MB/s 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:43.161 256+0 records in 00:04:43.161 256+0 records out 00:04:43.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145811 s, 71.9 MB/s 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@51 -- # local i 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.161 17:30:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:43.420 17:30:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:43.421 17:30:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:43.421 17:30:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:43.421 17:30:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.421 17:30:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.421 17:30:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:43.421 17:30:04 -- bdev/nbd_common.sh@41 -- # break 00:04:43.421 17:30:04 -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.421 17:30:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.421 17:30:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@41 -- # break 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@65 -- # true 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@65 -- # count=0 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@104 -- # count=0 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:43.680 17:30:05 -- bdev/nbd_common.sh@109 -- # return 0 00:04:43.680 17:30:05 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:43.940 17:30:05 -- event/event.sh@35 -- # 
sleep 3 00:04:44.199 [2024-07-24 17:30:05.648783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.199 [2024-07-24 17:30:05.713765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.199 [2024-07-24 17:30:05.713769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.199 [2024-07-24 17:30:05.755254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:44.199 [2024-07-24 17:30:05.755294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:47.493 17:30:08 -- event/event.sh@23 -- # for i in {0..2} 00:04:47.493 17:30:08 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:47.493 spdk_app_start Round 1 00:04:47.493 17:30:08 -- event/event.sh@25 -- # waitforlisten 430772 /var/tmp/spdk-nbd.sock 00:04:47.493 17:30:08 -- common/autotest_common.sh@819 -- # '[' -z 430772 ']' 00:04:47.493 17:30:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.493 17:30:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:47.493 17:30:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:47.493 17:30:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:47.493 17:30:08 -- common/autotest_common.sh@10 -- # set +x 00:04:47.493 17:30:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:47.493 17:30:08 -- common/autotest_common.sh@852 -- # return 0 00:04:47.493 17:30:08 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.493 Malloc0 00:04:47.493 17:30:08 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.493 Malloc1 00:04:47.493 17:30:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@12 -- # local i 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.493 17:30:08 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.753 /dev/nbd0 00:04:47.753 17:30:09 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.753 17:30:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.753 17:30:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:47.753 17:30:09 -- common/autotest_common.sh@857 -- # local i 00:04:47.753 17:30:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:47.753 17:30:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:47.753 17:30:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:47.753 17:30:09 -- common/autotest_common.sh@861 -- # break 00:04:47.753 17:30:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:47.753 17:30:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:47.753 17:30:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.753 1+0 records in 00:04:47.753 1+0 records out 00:04:47.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020726 s, 19.8 MB/s 00:04:47.753 17:30:09 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.753 17:30:09 -- common/autotest_common.sh@874 -- # size=4096 00:04:47.753 17:30:09 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.753 17:30:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:47.753 17:30:09 -- common/autotest_common.sh@877 -- # return 0 00:04:47.753 17:30:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.753 17:30:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.753 17:30:09 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.753 /dev/nbd1 00:04:48.012 17:30:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.012 17:30:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.012 17:30:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:48.012 17:30:09 -- common/autotest_common.sh@857 -- # local i 00:04:48.012 17:30:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:48.012 17:30:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:48.012 17:30:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:48.012 17:30:09 -- common/autotest_common.sh@861 -- # break 00:04:48.012 17:30:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:48.012 17:30:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:48.012 17:30:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.012 1+0 records in 00:04:48.012 1+0 records out 00:04:48.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000107014 s, 38.3 MB/s 00:04:48.012 17:30:09 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.012 17:30:09 -- common/autotest_common.sh@874 -- # size=4096 00:04:48.012 17:30:09 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.012 17:30:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:48.012 17:30:09 -- common/autotest_common.sh@877 -- # return 0 00:04:48.012 17:30:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:48.013 { 00:04:48.013 "nbd_device": "/dev/nbd0", 00:04:48.013 "bdev_name": "Malloc0" 00:04:48.013 }, 00:04:48.013 { 00:04:48.013 "nbd_device": "/dev/nbd1", 00:04:48.013 "bdev_name": "Malloc1" 00:04:48.013 } 00:04:48.013 ]' 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.013 { 00:04:48.013 "nbd_device": "/dev/nbd0", 00:04:48.013 "bdev_name": "Malloc0" 00:04:48.013 }, 00:04:48.013 { 00:04:48.013 "nbd_device": "/dev/nbd1", 00:04:48.013 "bdev_name": "Malloc1" 00:04:48.013 } 00:04:48.013 ]' 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:48.013 /dev/nbd1' 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:48.013 /dev/nbd1' 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@65 -- # count=2 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@95 -- # count=2 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:48.013 17:30:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:48.272 256+0 records in 00:04:48.272 256+0 records out 00:04:48.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105169 s, 99.7 MB/s 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:48.272 256+0 records in 00:04:48.272 256+0 records out 00:04:48.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130525 s, 80.3 MB/s 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:48.272 256+0 records in 00:04:48.272 256+0 records out 00:04:48.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140623 s, 74.6 MB/s 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@51 -- # local i 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@41 -- # break 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.272 17:30:09 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@41 -- # break 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.532 17:30:10 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@65 -- # true 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.792 17:30:10 -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.792 17:30:10 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.051 17:30:10 -- event/event.sh@35 -- # sleep 3 00:04:49.051 [2024-07-24 17:30:10.624603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.310 [2024-07-24 17:30:10.689991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.310 [2024-07-24 17:30:10.689994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.310 [2024-07-24 17:30:10.731126] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.310 [2024-07-24 17:30:10.731168] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:51.849 17:30:13 -- event/event.sh@23 -- # for i in {0..2} 00:04:51.849 17:30:13 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:51.849 spdk_app_start Round 2 00:04:51.849 17:30:13 -- event/event.sh@25 -- # waitforlisten 430772 /var/tmp/spdk-nbd.sock 00:04:51.849 17:30:13 -- common/autotest_common.sh@819 -- # '[' -z 430772 ']' 00:04:51.849 17:30:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.849 17:30:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:51.849 17:30:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
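The waitfornbd/waitfornbd_exit chatter repeated in every round is just a readiness probe on the freshly exported device. Roughly, as a sketch of what the helper in the trace is doing (the scratch file name and the retry delay here are illustrative, not the helper's exact values):

  # wait until the kernel lists the device, then prove a direct read works
  for i in $(seq 1 20); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [ "$(stat -c %s /tmp/nbdtest)" != 0 ]    # an empty read would mean the export is not serving data
  rm -f /tmp/nbdtest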
00:04:51.849 17:30:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:51.849 17:30:13 -- common/autotest_common.sh@10 -- # set +x 00:04:52.108 17:30:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:52.108 17:30:13 -- common/autotest_common.sh@852 -- # return 0 00:04:52.108 17:30:13 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.368 Malloc0 00:04:52.368 17:30:13 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.368 Malloc1 00:04:52.368 17:30:13 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@12 -- # local i 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.368 17:30:13 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.628 /dev/nbd0 00:04:52.628 17:30:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.628 17:30:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.628 17:30:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:52.628 17:30:14 -- common/autotest_common.sh@857 -- # local i 00:04:52.628 17:30:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:52.628 17:30:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:52.628 17:30:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:52.628 17:30:14 -- common/autotest_common.sh@861 -- # break 00:04:52.628 17:30:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:52.628 17:30:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:52.628 17:30:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.628 1+0 records in 00:04:52.628 1+0 records out 00:04:52.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184434 s, 22.2 MB/s 00:04:52.628 17:30:14 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.628 17:30:14 -- common/autotest_common.sh@874 -- # size=4096 00:04:52.628 17:30:14 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.628 17:30:14 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:04:52.628 17:30:14 -- common/autotest_common.sh@877 -- # return 0 00:04:52.628 17:30:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.628 17:30:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.628 17:30:14 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.889 /dev/nbd1 00:04:52.889 17:30:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.889 17:30:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.889 17:30:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:52.889 17:30:14 -- common/autotest_common.sh@857 -- # local i 00:04:52.889 17:30:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:52.889 17:30:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:52.889 17:30:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:52.889 17:30:14 -- common/autotest_common.sh@861 -- # break 00:04:52.889 17:30:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:52.889 17:30:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:52.889 17:30:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.889 1+0 records in 00:04:52.889 1+0 records out 00:04:52.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154586 s, 26.5 MB/s 00:04:52.889 17:30:14 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.889 17:30:14 -- common/autotest_common.sh@874 -- # size=4096 00:04:52.889 17:30:14 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.889 17:30:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:52.889 17:30:14 -- common/autotest_common.sh@877 -- # return 0 00:04:52.889 17:30:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.889 17:30:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.889 17:30:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.889 17:30:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.889 17:30:14 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:53.149 { 00:04:53.149 "nbd_device": "/dev/nbd0", 00:04:53.149 "bdev_name": "Malloc0" 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "nbd_device": "/dev/nbd1", 00:04:53.149 "bdev_name": "Malloc1" 00:04:53.149 } 00:04:53.149 ]' 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.149 { 00:04:53.149 "nbd_device": "/dev/nbd0", 00:04:53.149 "bdev_name": "Malloc0" 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "nbd_device": "/dev/nbd1", 00:04:53.149 "bdev_name": "Malloc1" 00:04:53.149 } 00:04:53.149 ]' 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.149 /dev/nbd1' 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.149 /dev/nbd1' 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.149 17:30:14 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.149 256+0 records in 00:04:53.149 256+0 records out 00:04:53.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102954 s, 102 MB/s 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.149 256+0 records in 00:04:53.149 256+0 records out 00:04:53.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137051 s, 76.5 MB/s 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.149 256+0 records in 00:04:53.149 256+0 records out 00:04:53.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149101 s, 70.3 MB/s 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.149 17:30:14 -- bdev/nbd_common.sh@51 -- # local i 00:04:53.150 17:30:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.150 17:30:14 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.409 17:30:14 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@41 -- # break 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@41 -- # break 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.409 17:30:14 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@65 -- # true 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.668 17:30:15 -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.668 17:30:15 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.928 17:30:15 -- event/event.sh@35 -- # sleep 3 00:04:54.187 [2024-07-24 17:30:15.610063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.187 [2024-07-24 17:30:15.673522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.187 [2024-07-24 17:30:15.673525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.187 [2024-07-24 17:30:15.714894] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.187 [2024-07-24 17:30:15.714946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
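Stripped of the xtrace noise, the data check each round performs is the dd/cmp sequence seen above, followed by teardown of the NBD exports; approximately (nbdrandtest is a scratch file, kept under test/event/ in the trace):

  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256        # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                         # the malloc bdevs must read back identical data
  cmp -b -n 1M nbdrandtest /dev/nbd1
  rm nbdrandtest
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  # nothing should be left exported
  count=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]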
00:04:57.524 17:30:18 -- event/event.sh@38 -- # waitforlisten 430772 /var/tmp/spdk-nbd.sock 00:04:57.524 17:30:18 -- common/autotest_common.sh@819 -- # '[' -z 430772 ']' 00:04:57.524 17:30:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.524 17:30:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.524 17:30:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.524 17:30:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.524 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:04:57.524 17:30:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:57.524 17:30:18 -- common/autotest_common.sh@852 -- # return 0 00:04:57.524 17:30:18 -- event/event.sh@39 -- # killprocess 430772 00:04:57.524 17:30:18 -- common/autotest_common.sh@926 -- # '[' -z 430772 ']' 00:04:57.524 17:30:18 -- common/autotest_common.sh@930 -- # kill -0 430772 00:04:57.524 17:30:18 -- common/autotest_common.sh@931 -- # uname 00:04:57.524 17:30:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:57.524 17:30:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 430772 00:04:57.524 17:30:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:57.524 17:30:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:57.524 17:30:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 430772' 00:04:57.524 killing process with pid 430772 00:04:57.524 17:30:18 -- common/autotest_common.sh@945 -- # kill 430772 00:04:57.524 17:30:18 -- common/autotest_common.sh@950 -- # wait 430772 00:04:57.524 spdk_app_start is called in Round 0. 00:04:57.524 Shutdown signal received, stop current app iteration 00:04:57.524 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:04:57.524 spdk_app_start is called in Round 1. 00:04:57.524 Shutdown signal received, stop current app iteration 00:04:57.524 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:04:57.524 spdk_app_start is called in Round 2. 00:04:57.524 Shutdown signal received, stop current app iteration 00:04:57.524 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:04:57.524 spdk_app_start is called in Round 3. 
00:04:57.524 Shutdown signal received, stop current app iteration 00:04:57.524 17:30:18 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:57.524 17:30:18 -- event/event.sh@42 -- # return 0 00:04:57.524 00:04:57.524 real 0m15.992s 00:04:57.524 user 0m34.541s 00:04:57.524 sys 0m2.329s 00:04:57.524 17:30:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.524 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:04:57.524 ************************************ 00:04:57.524 END TEST app_repeat 00:04:57.524 ************************************ 00:04:57.524 17:30:18 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:57.524 17:30:18 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:57.524 17:30:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.524 17:30:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.524 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:04:57.524 ************************************ 00:04:57.524 START TEST cpu_locks 00:04:57.524 ************************************ 00:04:57.524 17:30:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:57.524 * Looking for test storage... 00:04:57.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:57.524 17:30:18 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:57.524 17:30:18 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:57.524 17:30:18 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:57.524 17:30:18 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:57.524 17:30:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.524 17:30:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.524 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:04:57.524 ************************************ 00:04:57.524 START TEST default_locks 00:04:57.524 ************************************ 00:04:57.524 17:30:18 -- common/autotest_common.sh@1104 -- # default_locks 00:04:57.524 17:30:18 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=434063 00:04:57.524 17:30:18 -- event/cpu_locks.sh@47 -- # waitforlisten 434063 00:04:57.524 17:30:18 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.524 17:30:18 -- common/autotest_common.sh@819 -- # '[' -z 434063 ']' 00:04:57.524 17:30:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.524 17:30:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.524 17:30:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.524 17:30:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.524 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:04:57.524 [2024-07-24 17:30:18.960804] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:57.524 [2024-07-24 17:30:18.960854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434063 ] 00:04:57.524 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.524 [2024-07-24 17:30:19.015736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.524 [2024-07-24 17:30:19.086327] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.524 [2024-07-24 17:30:19.086472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.462 17:30:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.462 17:30:19 -- common/autotest_common.sh@852 -- # return 0 00:04:58.462 17:30:19 -- event/cpu_locks.sh@49 -- # locks_exist 434063 00:04:58.462 17:30:19 -- event/cpu_locks.sh@22 -- # lslocks -p 434063 00:04:58.462 17:30:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.723 lslocks: write error 00:04:58.723 17:30:20 -- event/cpu_locks.sh@50 -- # killprocess 434063 00:04:58.723 17:30:20 -- common/autotest_common.sh@926 -- # '[' -z 434063 ']' 00:04:58.723 17:30:20 -- common/autotest_common.sh@930 -- # kill -0 434063 00:04:58.723 17:30:20 -- common/autotest_common.sh@931 -- # uname 00:04:58.723 17:30:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:58.723 17:30:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 434063 00:04:58.723 17:30:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:58.723 17:30:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:58.723 17:30:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 434063' 00:04:58.723 killing process with pid 434063 00:04:58.723 17:30:20 -- common/autotest_common.sh@945 -- # kill 434063 00:04:58.723 17:30:20 -- common/autotest_common.sh@950 -- # wait 434063 00:04:58.984 17:30:20 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 434063 00:04:58.984 17:30:20 -- common/autotest_common.sh@640 -- # local es=0 00:04:58.984 17:30:20 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 434063 00:04:58.984 17:30:20 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:58.984 17:30:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:58.984 17:30:20 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:58.984 17:30:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:58.984 17:30:20 -- common/autotest_common.sh@643 -- # waitforlisten 434063 00:04:58.984 17:30:20 -- common/autotest_common.sh@819 -- # '[' -z 434063 ']' 00:04:58.984 17:30:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.984 17:30:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.984 17:30:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
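The locks_exist check traced at cpu_locks.sh@22 is the actual assertion of default_locks: once spdk_tgt is up with -m 0x1 it must be holding an advisory lock whose name contains spdk_cpu_lock. In sketch form (reconstructed from the trace):

locks_exist() {
    # lslocks lists the locks held by the pid; grep -q succeeds if any of them
    # is one of the per-core spdk_cpu_lock files
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

The "lslocks: write error" lines that accompany these checks are almost certainly lslocks hitting a closed pipe once grep -q exits at its first match; they are not failures.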
00:04:58.984 17:30:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.984 17:30:20 -- common/autotest_common.sh@10 -- # set +x 00:04:58.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (434063) - No such process 00:04:58.984 ERROR: process (pid: 434063) is no longer running 00:04:58.984 17:30:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.984 17:30:20 -- common/autotest_common.sh@852 -- # return 1 00:04:58.984 17:30:20 -- common/autotest_common.sh@643 -- # es=1 00:04:58.984 17:30:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:58.984 17:30:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:58.984 17:30:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:58.984 17:30:20 -- event/cpu_locks.sh@54 -- # no_locks 00:04:58.984 17:30:20 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:58.984 17:30:20 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:58.984 17:30:20 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:58.984 00:04:58.984 real 0m1.578s 00:04:58.984 user 0m1.633s 00:04:58.984 sys 0m0.529s 00:04:58.984 17:30:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.984 17:30:20 -- common/autotest_common.sh@10 -- # set +x 00:04:58.984 ************************************ 00:04:58.984 END TEST default_locks 00:04:58.984 ************************************ 00:04:58.984 17:30:20 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:58.984 17:30:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.984 17:30:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.984 17:30:20 -- common/autotest_common.sh@10 -- # set +x 00:04:58.984 ************************************ 00:04:58.984 START TEST default_locks_via_rpc 00:04:58.984 ************************************ 00:04:58.984 17:30:20 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:04:58.984 17:30:20 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=434350 00:04:58.984 17:30:20 -- event/cpu_locks.sh@63 -- # waitforlisten 434350 00:04:58.984 17:30:20 -- common/autotest_common.sh@819 -- # '[' -z 434350 ']' 00:04:58.984 17:30:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.984 17:30:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.984 17:30:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.984 17:30:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.984 17:30:20 -- common/autotest_common.sh@10 -- # set +x 00:04:58.984 17:30:20 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.984 [2024-07-24 17:30:20.570931] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:58.984 [2024-07-24 17:30:20.570982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434350 ] 00:04:59.244 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.244 [2024-07-24 17:30:20.624179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.244 [2024-07-24 17:30:20.701965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.244 [2024-07-24 17:30:20.702089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.813 17:30:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.813 17:30:21 -- common/autotest_common.sh@852 -- # return 0 00:04:59.813 17:30:21 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:59.813 17:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.813 17:30:21 -- common/autotest_common.sh@10 -- # set +x 00:04:59.813 17:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.813 17:30:21 -- event/cpu_locks.sh@67 -- # no_locks 00:04:59.813 17:30:21 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:59.813 17:30:21 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:59.813 17:30:21 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:59.813 17:30:21 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:59.813 17:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.813 17:30:21 -- common/autotest_common.sh@10 -- # set +x 00:04:59.813 17:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.813 17:30:21 -- event/cpu_locks.sh@71 -- # locks_exist 434350 00:04:59.813 17:30:21 -- event/cpu_locks.sh@22 -- # lslocks -p 434350 00:04:59.813 17:30:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.073 17:30:21 -- event/cpu_locks.sh@73 -- # killprocess 434350 00:05:00.073 17:30:21 -- common/autotest_common.sh@926 -- # '[' -z 434350 ']' 00:05:00.073 17:30:21 -- common/autotest_common.sh@930 -- # kill -0 434350 00:05:00.073 17:30:21 -- common/autotest_common.sh@931 -- # uname 00:05:00.073 17:30:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:00.073 17:30:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 434350 00:05:00.333 17:30:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:00.333 17:30:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:00.333 17:30:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 434350' 00:05:00.333 killing process with pid 434350 00:05:00.333 17:30:21 -- common/autotest_common.sh@945 -- # kill 434350 00:05:00.333 17:30:21 -- common/autotest_common.sh@950 -- # wait 434350 00:05:00.593 00:05:00.593 real 0m1.489s 00:05:00.593 user 0m1.561s 00:05:00.593 sys 0m0.451s 00:05:00.593 17:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.593 17:30:22 -- common/autotest_common.sh@10 -- # set +x 00:05:00.593 ************************************ 00:05:00.593 END TEST default_locks_via_rpc 00:05:00.593 ************************************ 00:05:00.593 17:30:22 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:00.593 17:30:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.593 17:30:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.593 17:30:22 -- common/autotest_common.sh@10 
-- # set +x 00:05:00.593 ************************************ 00:05:00.593 START TEST non_locking_app_on_locked_coremask 00:05:00.593 ************************************ 00:05:00.593 17:30:22 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:00.593 17:30:22 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=434702 00:05:00.593 17:30:22 -- event/cpu_locks.sh@81 -- # waitforlisten 434702 /var/tmp/spdk.sock 00:05:00.593 17:30:22 -- common/autotest_common.sh@819 -- # '[' -z 434702 ']' 00:05:00.593 17:30:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.593 17:30:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:00.593 17:30:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.593 17:30:22 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.593 17:30:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:00.593 17:30:22 -- common/autotest_common.sh@10 -- # set +x 00:05:00.593 [2024-07-24 17:30:22.097157] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:00.593 [2024-07-24 17:30:22.097207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434702 ] 00:05:00.593 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.593 [2024-07-24 17:30:22.150424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.852 [2024-07-24 17:30:22.232360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.852 [2024-07-24 17:30:22.232475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.420 17:30:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:01.420 17:30:22 -- common/autotest_common.sh@852 -- # return 0 00:05:01.420 17:30:22 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=434827 00:05:01.420 17:30:22 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:01.420 17:30:22 -- event/cpu_locks.sh@85 -- # waitforlisten 434827 /var/tmp/spdk2.sock 00:05:01.420 17:30:22 -- common/autotest_common.sh@819 -- # '[' -z 434827 ']' 00:05:01.420 17:30:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.420 17:30:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:01.420 17:30:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.420 17:30:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:01.420 17:30:22 -- common/autotest_common.sh@10 -- # set +x 00:05:01.420 [2024-07-24 17:30:22.914077] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:01.420 [2024-07-24 17:30:22.914123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434827 ] 00:05:01.420 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.420 [2024-07-24 17:30:22.989217] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:01.420 [2024-07-24 17:30:22.989238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.681 [2024-07-24 17:30:23.133794] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.681 [2024-07-24 17:30:23.133911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.250 17:30:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:02.250 17:30:23 -- common/autotest_common.sh@852 -- # return 0 00:05:02.250 17:30:23 -- event/cpu_locks.sh@87 -- # locks_exist 434702 00:05:02.250 17:30:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.250 17:30:23 -- event/cpu_locks.sh@22 -- # lslocks -p 434702 00:05:02.819 lslocks: write error 00:05:02.819 17:30:24 -- event/cpu_locks.sh@89 -- # killprocess 434702 00:05:02.819 17:30:24 -- common/autotest_common.sh@926 -- # '[' -z 434702 ']' 00:05:02.819 17:30:24 -- common/autotest_common.sh@930 -- # kill -0 434702 00:05:02.819 17:30:24 -- common/autotest_common.sh@931 -- # uname 00:05:02.819 17:30:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:02.819 17:30:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 434702 00:05:02.819 17:30:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:02.819 17:30:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:02.819 17:30:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 434702' 00:05:02.819 killing process with pid 434702 00:05:02.819 17:30:24 -- common/autotest_common.sh@945 -- # kill 434702 00:05:02.819 17:30:24 -- common/autotest_common.sh@950 -- # wait 434702 00:05:03.390 17:30:24 -- event/cpu_locks.sh@90 -- # killprocess 434827 00:05:03.390 17:30:24 -- common/autotest_common.sh@926 -- # '[' -z 434827 ']' 00:05:03.390 17:30:24 -- common/autotest_common.sh@930 -- # kill -0 434827 00:05:03.390 17:30:24 -- common/autotest_common.sh@931 -- # uname 00:05:03.390 17:30:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:03.390 17:30:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 434827 00:05:03.390 17:30:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:03.390 17:30:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:03.390 17:30:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 434827' 00:05:03.390 killing process with pid 434827 00:05:03.390 17:30:24 -- common/autotest_common.sh@945 -- # kill 434827 00:05:03.390 17:30:24 -- common/autotest_common.sh@950 -- # wait 434827 00:05:03.960 00:05:03.960 real 0m3.250s 00:05:03.960 user 0m3.471s 00:05:03.960 sys 0m0.883s 00:05:03.960 17:30:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.960 17:30:25 -- common/autotest_common.sh@10 -- # set +x 00:05:03.960 ************************************ 00:05:03.960 END TEST non_locking_app_on_locked_coremask 00:05:03.960 ************************************ 00:05:03.960 17:30:25 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
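Stripped of the helper noise, the non_locking_app_on_locked_coremask case that just ended boils down to the two launches traced at cpu_locks.sh@79 and @83 (binary path shortened here; this is a sketch, not the script itself):

# pid 434702: claims the core-0 lock file
spdk_tgt -m 0x1 &
# pid 434827: same core mask, but lock acquisition skipped
# ("CPU core locks deactivated." in the log), so both targets coexist on core 0
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

The remaining cpu_locks cases below permute the same two knobs, the core mask and --disable-cpumask-locks, and then check either peaceful coexistence or a claim failure.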
00:05:03.960 17:30:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.960 17:30:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.960 17:30:25 -- common/autotest_common.sh@10 -- # set +x 00:05:03.960 ************************************ 00:05:03.960 START TEST locking_app_on_unlocked_coremask 00:05:03.960 ************************************ 00:05:03.960 17:30:25 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:03.960 17:30:25 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=435327 00:05:03.960 17:30:25 -- event/cpu_locks.sh@99 -- # waitforlisten 435327 /var/tmp/spdk.sock 00:05:03.960 17:30:25 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:03.960 17:30:25 -- common/autotest_common.sh@819 -- # '[' -z 435327 ']' 00:05:03.960 17:30:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.960 17:30:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:03.960 17:30:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.960 17:30:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:03.960 17:30:25 -- common/autotest_common.sh@10 -- # set +x 00:05:03.960 [2024-07-24 17:30:25.388890] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:03.960 [2024-07-24 17:30:25.388941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435327 ] 00:05:03.960 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.960 [2024-07-24 17:30:25.442192] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:03.960 [2024-07-24 17:30:25.442222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.960 [2024-07-24 17:30:25.508918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:03.960 [2024-07-24 17:30:25.509061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.903 17:30:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:04.903 17:30:26 -- common/autotest_common.sh@852 -- # return 0 00:05:04.903 17:30:26 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=435454 00:05:04.903 17:30:26 -- event/cpu_locks.sh@103 -- # waitforlisten 435454 /var/tmp/spdk2.sock 00:05:04.903 17:30:26 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:04.903 17:30:26 -- common/autotest_common.sh@819 -- # '[' -z 435454 ']' 00:05:04.903 17:30:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.903 17:30:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:04.903 17:30:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:04.903 17:30:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:04.903 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:04.903 [2024-07-24 17:30:26.218819] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:04.903 [2024-07-24 17:30:26.218866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435454 ] 00:05:04.903 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.903 [2024-07-24 17:30:26.294442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.903 [2024-07-24 17:30:26.437041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:04.903 [2024-07-24 17:30:26.437189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.474 17:30:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:05.474 17:30:27 -- common/autotest_common.sh@852 -- # return 0 00:05:05.474 17:30:27 -- event/cpu_locks.sh@105 -- # locks_exist 435454 00:05:05.474 17:30:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.474 17:30:27 -- event/cpu_locks.sh@22 -- # lslocks -p 435454 00:05:06.045 lslocks: write error 00:05:06.045 17:30:27 -- event/cpu_locks.sh@107 -- # killprocess 435327 00:05:06.045 17:30:27 -- common/autotest_common.sh@926 -- # '[' -z 435327 ']' 00:05:06.045 17:30:27 -- common/autotest_common.sh@930 -- # kill -0 435327 00:05:06.045 17:30:27 -- common/autotest_common.sh@931 -- # uname 00:05:06.045 17:30:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:06.045 17:30:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 435327 00:05:06.045 17:30:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:06.045 17:30:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:06.045 17:30:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 435327' 00:05:06.045 killing process with pid 435327 00:05:06.045 17:30:27 -- common/autotest_common.sh@945 -- # kill 435327 00:05:06.045 17:30:27 -- common/autotest_common.sh@950 -- # wait 435327 00:05:06.984 17:30:28 -- event/cpu_locks.sh@108 -- # killprocess 435454 00:05:06.984 17:30:28 -- common/autotest_common.sh@926 -- # '[' -z 435454 ']' 00:05:06.984 17:30:28 -- common/autotest_common.sh@930 -- # kill -0 435454 00:05:06.984 17:30:28 -- common/autotest_common.sh@931 -- # uname 00:05:06.984 17:30:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:06.984 17:30:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 435454 00:05:06.984 17:30:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:06.984 17:30:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:06.984 17:30:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 435454' 00:05:06.984 killing process with pid 435454 00:05:06.984 17:30:28 -- common/autotest_common.sh@945 -- # kill 435454 00:05:06.984 17:30:28 -- common/autotest_common.sh@950 -- # wait 435454 00:05:07.244 00:05:07.244 real 0m3.297s 00:05:07.244 user 0m3.494s 00:05:07.244 sys 0m0.947s 00:05:07.244 17:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.244 17:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:07.244 ************************************ 00:05:07.244 END TEST locking_app_on_unlocked_coremask 00:05:07.244 
************************************ 00:05:07.244 17:30:28 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:07.244 17:30:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.244 17:30:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.244 17:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:07.244 ************************************ 00:05:07.244 START TEST locking_app_on_locked_coremask 00:05:07.244 ************************************ 00:05:07.244 17:30:28 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:07.244 17:30:28 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=435842 00:05:07.244 17:30:28 -- event/cpu_locks.sh@116 -- # waitforlisten 435842 /var/tmp/spdk.sock 00:05:07.244 17:30:28 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.244 17:30:28 -- common/autotest_common.sh@819 -- # '[' -z 435842 ']' 00:05:07.244 17:30:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.244 17:30:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:07.244 17:30:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.244 17:30:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:07.244 17:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:07.244 [2024-07-24 17:30:28.726075] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:07.244 [2024-07-24 17:30:28.726126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435842 ] 00:05:07.244 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.244 [2024-07-24 17:30:28.781278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.503 [2024-07-24 17:30:28.850307] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:07.503 [2024-07-24 17:30:28.850431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.071 17:30:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:08.071 17:30:29 -- common/autotest_common.sh@852 -- # return 0 00:05:08.071 17:30:29 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=436075 00:05:08.071 17:30:29 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 436075 /var/tmp/spdk2.sock 00:05:08.071 17:30:29 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:08.071 17:30:29 -- common/autotest_common.sh@640 -- # local es=0 00:05:08.071 17:30:29 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 436075 /var/tmp/spdk2.sock 00:05:08.071 17:30:29 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:08.071 17:30:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:08.071 17:30:29 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:08.071 17:30:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:08.071 17:30:29 -- common/autotest_common.sh@643 -- # waitforlisten 436075 /var/tmp/spdk2.sock 00:05:08.071 17:30:29 -- common/autotest_common.sh@819 -- # '[' -z 436075 ']' 
00:05:08.071 17:30:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.071 17:30:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:08.071 17:30:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.071 17:30:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:08.071 17:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:08.071 [2024-07-24 17:30:29.567007] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:08.072 [2024-07-24 17:30:29.567059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436075 ] 00:05:08.072 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.072 [2024-07-24 17:30:29.640981] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 435842 has claimed it. 00:05:08.072 [2024-07-24 17:30:29.641021] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (436075) - No such process 00:05:08.640 ERROR: process (pid: 436075) is no longer running 00:05:08.640 17:30:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:08.640 17:30:30 -- common/autotest_common.sh@852 -- # return 1 00:05:08.640 17:30:30 -- common/autotest_common.sh@643 -- # es=1 00:05:08.640 17:30:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:08.640 17:30:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:08.640 17:30:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:08.640 17:30:30 -- event/cpu_locks.sh@122 -- # locks_exist 435842 00:05:08.640 17:30:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.640 17:30:30 -- event/cpu_locks.sh@22 -- # lslocks -p 435842 00:05:09.208 lslocks: write error 00:05:09.208 17:30:30 -- event/cpu_locks.sh@124 -- # killprocess 435842 00:05:09.208 17:30:30 -- common/autotest_common.sh@926 -- # '[' -z 435842 ']' 00:05:09.208 17:30:30 -- common/autotest_common.sh@930 -- # kill -0 435842 00:05:09.208 17:30:30 -- common/autotest_common.sh@931 -- # uname 00:05:09.208 17:30:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:09.208 17:30:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 435842 00:05:09.208 17:30:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:09.208 17:30:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:09.208 17:30:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 435842' 00:05:09.208 killing process with pid 435842 00:05:09.208 17:30:30 -- common/autotest_common.sh@945 -- # kill 435842 00:05:09.208 17:30:30 -- common/autotest_common.sh@950 -- # wait 435842 00:05:09.467 00:05:09.467 real 0m2.307s 00:05:09.467 user 0m2.524s 00:05:09.467 sys 0m0.627s 00:05:09.468 17:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.468 17:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.468 ************************************ 00:05:09.468 END TEST locking_app_on_locked_coremask 00:05:09.468 ************************************ 00:05:09.468 17:30:31 -- event/cpu_locks.sh@171 -- 
# run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:09.468 17:30:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.468 17:30:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.468 17:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.468 ************************************ 00:05:09.468 START TEST locking_overlapped_coremask 00:05:09.468 ************************************ 00:05:09.468 17:30:31 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:09.468 17:30:31 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=436334 00:05:09.468 17:30:31 -- event/cpu_locks.sh@133 -- # waitforlisten 436334 /var/tmp/spdk.sock 00:05:09.468 17:30:31 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:09.468 17:30:31 -- common/autotest_common.sh@819 -- # '[' -z 436334 ']' 00:05:09.468 17:30:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.468 17:30:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.468 17:30:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.468 17:30:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.468 17:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.728 [2024-07-24 17:30:31.067838] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:09.728 [2024-07-24 17:30:31.067890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436334 ] 00:05:09.728 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.728 [2024-07-24 17:30:31.120645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.728 [2024-07-24 17:30:31.190311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:09.728 [2024-07-24 17:30:31.190450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.728 [2024-07-24 17:30:31.190465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.728 [2024-07-24 17:30:31.190467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.299 17:30:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:10.299 17:30:31 -- common/autotest_common.sh@852 -- # return 0 00:05:10.299 17:30:31 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=436571 00:05:10.299 17:30:31 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 436571 /var/tmp/spdk2.sock 00:05:10.299 17:30:31 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:10.299 17:30:31 -- common/autotest_common.sh@640 -- # local es=0 00:05:10.299 17:30:31 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 436571 /var/tmp/spdk2.sock 00:05:10.299 17:30:31 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:10.299 17:30:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:10.299 17:30:31 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:10.299 17:30:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:10.299 17:30:31 -- common/autotest_common.sh@643 -- # 
waitforlisten 436571 /var/tmp/spdk2.sock 00:05:10.299 17:30:31 -- common/autotest_common.sh@819 -- # '[' -z 436571 ']' 00:05:10.299 17:30:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.299 17:30:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:10.299 17:30:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.299 17:30:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:10.299 17:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.559 [2024-07-24 17:30:31.912095] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:10.559 [2024-07-24 17:30:31.912159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436571 ] 00:05:10.559 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.559 [2024-07-24 17:30:31.987442] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 436334 has claimed it. 00:05:10.559 [2024-07-24 17:30:31.987483] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:11.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (436571) - No such process 00:05:11.165 ERROR: process (pid: 436571) is no longer running 00:05:11.165 17:30:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.165 17:30:32 -- common/autotest_common.sh@852 -- # return 1 00:05:11.165 17:30:32 -- common/autotest_common.sh@643 -- # es=1 00:05:11.165 17:30:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:11.165 17:30:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:11.165 17:30:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:11.165 17:30:32 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:11.165 17:30:32 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:11.165 17:30:32 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:11.165 17:30:32 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:11.165 17:30:32 -- event/cpu_locks.sh@141 -- # killprocess 436334 00:05:11.165 17:30:32 -- common/autotest_common.sh@926 -- # '[' -z 436334 ']' 00:05:11.165 17:30:32 -- common/autotest_common.sh@930 -- # kill -0 436334 00:05:11.165 17:30:32 -- common/autotest_common.sh@931 -- # uname 00:05:11.165 17:30:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:11.165 17:30:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 436334 00:05:11.165 17:30:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:11.165 17:30:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:11.165 17:30:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 436334' 00:05:11.165 killing process with pid 436334 00:05:11.166 17:30:32 -- common/autotest_common.sh@945 -- # kill 436334 00:05:11.166 17:30:32 -- common/autotest_common.sh@950 -- # wait 436334 
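check_remaining_locks, traced at cpu_locks.sh@36-38, is what closes out locking_overlapped_coremask: after the 0x1c target fails to claim core 2, exactly the three lock files belonging to the surviving 0x7 target must still be present. Roughly (file names taken verbatim from the trace):

check_remaining_locks() {
    # glob the lock files actually on disk and compare them, as a single string,
    # against the expected set for cores 0-2
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}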
00:05:11.453 00:05:11.453 real 0m1.900s 00:05:11.453 user 0m5.344s 00:05:11.453 sys 0m0.392s 00:05:11.453 17:30:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.453 17:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:11.453 ************************************ 00:05:11.453 END TEST locking_overlapped_coremask 00:05:11.453 ************************************ 00:05:11.453 17:30:32 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:11.453 17:30:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.453 17:30:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.453 17:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:11.453 ************************************ 00:05:11.453 START TEST locking_overlapped_coremask_via_rpc 00:05:11.453 ************************************ 00:05:11.453 17:30:32 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:11.453 17:30:32 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=436662 00:05:11.453 17:30:32 -- event/cpu_locks.sh@149 -- # waitforlisten 436662 /var/tmp/spdk.sock 00:05:11.453 17:30:32 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:11.453 17:30:32 -- common/autotest_common.sh@819 -- # '[' -z 436662 ']' 00:05:11.453 17:30:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.453 17:30:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:11.453 17:30:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.453 17:30:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:11.453 17:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:11.453 [2024-07-24 17:30:33.009403] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:11.453 [2024-07-24 17:30:33.009452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436662 ] 00:05:11.453 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.713 [2024-07-24 17:30:33.062446] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:11.713 [2024-07-24 17:30:33.062473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.713 [2024-07-24 17:30:33.140693] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:11.713 [2024-07-24 17:30:33.140837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.713 [2024-07-24 17:30:33.140943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.713 [2024-07-24 17:30:33.140932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.281 17:30:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:12.281 17:30:33 -- common/autotest_common.sh@852 -- # return 0 00:05:12.281 17:30:33 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=436848 00:05:12.281 17:30:33 -- event/cpu_locks.sh@153 -- # waitforlisten 436848 /var/tmp/spdk2.sock 00:05:12.281 17:30:33 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:12.281 17:30:33 -- common/autotest_common.sh@819 -- # '[' -z 436848 ']' 00:05:12.281 17:30:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.281 17:30:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.281 17:30:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.281 17:30:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.281 17:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:12.281 [2024-07-24 17:30:33.857372] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:12.281 [2024-07-24 17:30:33.857418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436848 ] 00:05:12.281 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.540 [2024-07-24 17:30:33.933078] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:12.540 [2024-07-24 17:30:33.933101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.540 [2024-07-24 17:30:34.077768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.540 [2024-07-24 17:30:34.077923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.540 [2024-07-24 17:30:34.081094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.540 [2024-07-24 17:30:34.081095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:13.107 17:30:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:13.107 17:30:34 -- common/autotest_common.sh@852 -- # return 0 00:05:13.107 17:30:34 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.107 17:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:13.107 17:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.107 17:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:13.107 17:30:34 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.107 17:30:34 -- common/autotest_common.sh@640 -- # local es=0 00:05:13.107 17:30:34 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.107 17:30:34 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:13.107 17:30:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:13.107 17:30:34 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:13.107 17:30:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:13.107 17:30:34 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.107 17:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:13.107 17:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.107 [2024-07-24 17:30:34.688110] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 436662 has claimed it. 00:05:13.107 request: 00:05:13.107 { 00:05:13.107 "method": "framework_enable_cpumask_locks", 00:05:13.107 "req_id": 1 00:05:13.107 } 00:05:13.107 Got JSON-RPC error response 00:05:13.107 response: 00:05:13.107 { 00:05:13.107 "code": -32603, 00:05:13.107 "message": "Failed to claim CPU core: 2" 00:05:13.107 } 00:05:13.107 17:30:34 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:13.107 17:30:34 -- common/autotest_common.sh@643 -- # es=1 00:05:13.107 17:30:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:13.107 17:30:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:13.107 17:30:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:13.107 17:30:34 -- event/cpu_locks.sh@158 -- # waitforlisten 436662 /var/tmp/spdk.sock 00:05:13.107 17:30:34 -- common/autotest_common.sh@819 -- # '[' -z 436662 ']' 00:05:13.107 17:30:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.107 17:30:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:13.107 17:30:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
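That JSON-RPC exchange is the heart of locking_overlapped_coremask_via_rpc: the 0x7 target (pid 436662) enables its cpumask locks first, so when the 0x1c target is asked to do the same it cannot claim the shared core 2. Reissued by hand, the two calls would look roughly like this (rpc.py path abbreviated from the one in the log):

# first target: succeeds, takes the locks for cores 0-2
scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
# second target (mask 0x1c, started with --disable-cpumask-locks): rejected
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
#   -> JSON-RPC error -32603: "Failed to claim CPU core: 2"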
00:05:13.107 17:30:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:13.107 17:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.366 17:30:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:13.366 17:30:34 -- common/autotest_common.sh@852 -- # return 0 00:05:13.366 17:30:34 -- event/cpu_locks.sh@159 -- # waitforlisten 436848 /var/tmp/spdk2.sock 00:05:13.366 17:30:34 -- common/autotest_common.sh@819 -- # '[' -z 436848 ']' 00:05:13.366 17:30:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.366 17:30:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:13.366 17:30:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.366 17:30:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:13.366 17:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:13.626 17:30:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:13.626 17:30:35 -- common/autotest_common.sh@852 -- # return 0 00:05:13.626 17:30:35 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:13.626 17:30:35 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.626 17:30:35 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.626 17:30:35 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.626 00:05:13.626 real 0m2.093s 00:05:13.626 user 0m0.844s 00:05:13.626 sys 0m0.176s 00:05:13.626 17:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.626 17:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:13.626 ************************************ 00:05:13.626 END TEST locking_overlapped_coremask_via_rpc 00:05:13.626 ************************************ 00:05:13.626 17:30:35 -- event/cpu_locks.sh@174 -- # cleanup 00:05:13.626 17:30:35 -- event/cpu_locks.sh@15 -- # [[ -z 436662 ]] 00:05:13.626 17:30:35 -- event/cpu_locks.sh@15 -- # killprocess 436662 00:05:13.626 17:30:35 -- common/autotest_common.sh@926 -- # '[' -z 436662 ']' 00:05:13.626 17:30:35 -- common/autotest_common.sh@930 -- # kill -0 436662 00:05:13.626 17:30:35 -- common/autotest_common.sh@931 -- # uname 00:05:13.626 17:30:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:13.626 17:30:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 436662 00:05:13.626 17:30:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:13.626 17:30:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:13.626 17:30:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 436662' 00:05:13.626 killing process with pid 436662 00:05:13.626 17:30:35 -- common/autotest_common.sh@945 -- # kill 436662 00:05:13.626 17:30:35 -- common/autotest_common.sh@950 -- # wait 436662 00:05:14.195 17:30:35 -- event/cpu_locks.sh@16 -- # [[ -z 436848 ]] 00:05:14.195 17:30:35 -- event/cpu_locks.sh@16 -- # killprocess 436848 00:05:14.195 17:30:35 -- common/autotest_common.sh@926 -- # '[' -z 436848 ']' 00:05:14.195 17:30:35 -- common/autotest_common.sh@930 -- # kill -0 436848 00:05:14.195 17:30:35 -- common/autotest_common.sh@931 -- # uname 00:05:14.195 
17:30:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:14.195 17:30:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 436848 00:05:14.195 17:30:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:14.195 17:30:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:14.195 17:30:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 436848' 00:05:14.195 killing process with pid 436848 00:05:14.195 17:30:35 -- common/autotest_common.sh@945 -- # kill 436848 00:05:14.195 17:30:35 -- common/autotest_common.sh@950 -- # wait 436848 00:05:14.455 17:30:35 -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.455 17:30:35 -- event/cpu_locks.sh@1 -- # cleanup 00:05:14.455 17:30:35 -- event/cpu_locks.sh@15 -- # [[ -z 436662 ]] 00:05:14.455 17:30:35 -- event/cpu_locks.sh@15 -- # killprocess 436662 00:05:14.455 17:30:35 -- common/autotest_common.sh@926 -- # '[' -z 436662 ']' 00:05:14.455 17:30:35 -- common/autotest_common.sh@930 -- # kill -0 436662 00:05:14.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (436662) - No such process 00:05:14.455 17:30:35 -- common/autotest_common.sh@953 -- # echo 'Process with pid 436662 is not found' 00:05:14.455 Process with pid 436662 is not found 00:05:14.455 17:30:35 -- event/cpu_locks.sh@16 -- # [[ -z 436848 ]] 00:05:14.455 17:30:35 -- event/cpu_locks.sh@16 -- # killprocess 436848 00:05:14.455 17:30:35 -- common/autotest_common.sh@926 -- # '[' -z 436848 ']' 00:05:14.455 17:30:35 -- common/autotest_common.sh@930 -- # kill -0 436848 00:05:14.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (436848) - No such process 00:05:14.455 17:30:35 -- common/autotest_common.sh@953 -- # echo 'Process with pid 436848 is not found' 00:05:14.455 Process with pid 436848 is not found 00:05:14.455 17:30:35 -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.455 00:05:14.455 real 0m17.037s 00:05:14.455 user 0m29.439s 00:05:14.455 sys 0m4.781s 00:05:14.455 17:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.455 17:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:14.455 ************************************ 00:05:14.455 END TEST cpu_locks 00:05:14.455 ************************************ 00:05:14.455 00:05:14.455 real 0m42.077s 00:05:14.455 user 1m21.064s 00:05:14.455 sys 0m7.883s 00:05:14.455 17:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.455 17:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:14.455 ************************************ 00:05:14.455 END TEST event 00:05:14.455 ************************************ 00:05:14.455 17:30:35 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:14.455 17:30:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.455 17:30:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.455 17:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:14.455 ************************************ 00:05:14.455 START TEST thread 00:05:14.455 ************************************ 00:05:14.455 17:30:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:14.455 * Looking for test storage... 
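The cleanup trap traced at cpu_locks.sh@15-18 explains the "No such process" / "is not found" pair just above: both targets had already been killed by their own test, so the trap's killprocess calls are expected no-ops before it removes any leftover lock files. A sketch; the pid variables are the ones assigned earlier in the trace, but the arguments of the rm -f at @18 are not visible in the log, so the glob below is an assumption:

cleanup() {
    # pids recorded by the last test; here 436662 and 436848, both already gone
    [[ -n $spdk_tgt_pid ]] && killprocess "$spdk_tgt_pid"
    [[ -n $spdk_tgt_pid2 ]] && killprocess "$spdk_tgt_pid2"
    rm -f /var/tmp/spdk_cpu_lock_*    # assumed: drop stale per-core lock files
}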
00:05:14.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:14.455 17:30:36 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.455 17:30:36 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:14.455 17:30:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.455 17:30:36 -- common/autotest_common.sh@10 -- # set +x 00:05:14.455 ************************************ 00:05:14.455 START TEST thread_poller_perf 00:05:14.455 ************************************ 00:05:14.455 17:30:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.714 [2024-07-24 17:30:36.056991] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:14.714 [2024-07-24 17:30:36.057093] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437402 ] 00:05:14.714 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.714 [2024-07-24 17:30:36.113243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.714 [2024-07-24 17:30:36.182664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.714 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:16.095 ====================================== 00:05:16.095 busy:2308651020 (cyc) 00:05:16.095 total_run_count: 389000 00:05:16.095 tsc_hz: 2300000000 (cyc) 00:05:16.095 ====================================== 00:05:16.095 poller_cost: 5934 (cyc), 2580 (nsec) 00:05:16.095 00:05:16.095 real 0m1.242s 00:05:16.095 user 0m1.163s 00:05:16.095 sys 0m0.074s 00:05:16.095 17:30:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.095 17:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:16.095 ************************************ 00:05:16.095 END TEST thread_poller_perf 00:05:16.095 ************************************ 00:05:16.095 17:30:37 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:16.095 17:30:37 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:16.095 17:30:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.095 17:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:16.095 ************************************ 00:05:16.095 START TEST thread_poller_perf 00:05:16.095 ************************************ 00:05:16.095 17:30:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:16.095 [2024-07-24 17:30:37.338947] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:16.095 [2024-07-24 17:30:37.339028] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437632 ] 00:05:16.095 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.095 [2024-07-24 17:30:37.397894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.095 [2024-07-24 17:30:37.466309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.095 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:17.033 ====================================== 00:05:17.033 busy:2302089566 (cyc) 00:05:17.033 total_run_count: 5447000 00:05:17.033 tsc_hz: 2300000000 (cyc) 00:05:17.033 ====================================== 00:05:17.033 poller_cost: 422 (cyc), 183 (nsec) 00:05:17.033 00:05:17.033 real 0m1.238s 00:05:17.033 user 0m1.164s 00:05:17.033 sys 0m0.069s 00:05:17.033 17:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.033 17:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:17.033 ************************************ 00:05:17.033 END TEST thread_poller_perf 00:05:17.033 ************************************ 00:05:17.033 17:30:38 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:17.033 00:05:17.033 real 0m2.637s 00:05:17.033 user 0m2.381s 00:05:17.033 sys 0m0.267s 00:05:17.033 17:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.033 17:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:17.033 ************************************ 00:05:17.033 END TEST thread 00:05:17.033 ************************************ 00:05:17.033 17:30:38 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:17.033 17:30:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.033 17:30:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.033 17:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:17.033 ************************************ 00:05:17.033 START TEST accel 00:05:17.033 ************************************ 00:05:17.033 17:30:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:17.292 * Looking for test storage... 00:05:17.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:17.292 17:30:38 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:17.292 17:30:38 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:17.292 17:30:38 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:17.292 17:30:38 -- accel/accel.sh@59 -- # spdk_tgt_pid=437943 00:05:17.292 17:30:38 -- accel/accel.sh@60 -- # waitforlisten 437943 00:05:17.292 17:30:38 -- common/autotest_common.sh@819 -- # '[' -z 437943 ']' 00:05:17.292 17:30:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.292 17:30:38 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:17.292 17:30:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:17.292 17:30:38 -- accel/accel.sh@58 -- # build_accel_config 00:05:17.292 17:30:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:17.292 17:30:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:17.292 17:30:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:17.292 17:30:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.292 17:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:17.292 17:30:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.292 17:30:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:17.292 17:30:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:17.292 17:30:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:17.292 17:30:38 -- accel/accel.sh@42 -- # jq -r . 00:05:17.292 [2024-07-24 17:30:38.762372] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:17.292 [2024-07-24 17:30:38.762427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437943 ] 00:05:17.292 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.292 [2024-07-24 17:30:38.816371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.551 [2024-07-24 17:30:38.894036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.551 [2024-07-24 17:30:38.894178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.121 17:30:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:18.121 17:30:39 -- common/autotest_common.sh@852 -- # return 0 00:05:18.121 17:30:39 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:18.121 17:30:39 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:18.121 17:30:39 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:18.121 17:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.121 17:30:39 -- common/autotest_common.sh@10 -- # set +x 00:05:18.121 17:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.121 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.121 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.121 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.121 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.121 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.121 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.121 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.121 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.121 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.121 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.121 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.121 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.121 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.121 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.121 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.121 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.121 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.121 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.121 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.121 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.121 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.122 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.122 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.122 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.122 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.122 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.122 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.122 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.122 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.122 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.122 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.122 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.122 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.122 
17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.122 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.122 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.122 17:30:39 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # IFS== 00:05:18.122 17:30:39 -- accel/accel.sh@64 -- # read -r opc module 00:05:18.122 17:30:39 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:18.122 17:30:39 -- accel/accel.sh@67 -- # killprocess 437943 00:05:18.122 17:30:39 -- common/autotest_common.sh@926 -- # '[' -z 437943 ']' 00:05:18.122 17:30:39 -- common/autotest_common.sh@930 -- # kill -0 437943 00:05:18.122 17:30:39 -- common/autotest_common.sh@931 -- # uname 00:05:18.122 17:30:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:18.122 17:30:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 437943 00:05:18.122 17:30:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:18.122 17:30:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:18.122 17:30:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 437943' 00:05:18.122 killing process with pid 437943 00:05:18.122 17:30:39 -- common/autotest_common.sh@945 -- # kill 437943 00:05:18.122 17:30:39 -- common/autotest_common.sh@950 -- # wait 437943 00:05:18.381 17:30:39 -- accel/accel.sh@68 -- # trap - ERR 00:05:18.381 17:30:39 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:18.382 17:30:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:18.382 17:30:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.382 17:30:39 -- common/autotest_common.sh@10 -- # set +x 00:05:18.641 17:30:39 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:18.641 17:30:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:18.642 17:30:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.642 17:30:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.642 17:30:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.642 17:30:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.642 17:30:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.642 17:30:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.642 17:30:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.642 17:30:39 -- accel/accel.sh@42 -- # jq -r . 
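
The long run of IFS== / read -r opc module lines above is accel.sh walking the output of the accel_get_opc_assignments RPC and recording which module backs each opcode; since the accel_json_cfg array in build_accel_config stays empty in this run, every opcode resolves to the software module. A minimal sketch of that key=value parse, fed with canned sample assignments instead of a live RPC so it runs standalone (this is not the accel.sh code itself):

  #!/usr/bin/env bash
  # Stand-in for: rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  exp_opcs=(copy=software fill=software crc32c=software)
  declare -A expected_opcs
  for opc_opt in "${exp_opcs[@]}"; do
      IFS='=' read -r opc module <<< "$opc_opt"
      expected_opcs["$opc"]=$module
  done
  for opc in "${!expected_opcs[@]}"; do
      echo "$opc -> ${expected_opcs[$opc]}"
  done
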
00:05:18.642 17:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.642 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:18.642 17:30:40 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:18.642 17:30:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:18.642 17:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.642 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:18.642 ************************************ 00:05:18.642 START TEST accel_missing_filename 00:05:18.642 ************************************ 00:05:18.642 17:30:40 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:18.642 17:30:40 -- common/autotest_common.sh@640 -- # local es=0 00:05:18.642 17:30:40 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:18.642 17:30:40 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:18.642 17:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:18.642 17:30:40 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:18.642 17:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:18.642 17:30:40 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:18.642 17:30:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:18.642 17:30:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.642 17:30:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.642 17:30:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.642 17:30:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.642 17:30:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.642 17:30:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.642 17:30:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.642 17:30:40 -- accel/accel.sh@42 -- # jq -r . 00:05:18.642 [2024-07-24 17:30:40.075232] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:18.642 [2024-07-24 17:30:40.075312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438183 ] 00:05:18.642 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.642 [2024-07-24 17:30:40.131554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.642 [2024-07-24 17:30:40.200781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.901 [2024-07-24 17:30:40.241656] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.901 [2024-07-24 17:30:40.301605] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:18.901 A filename is required. 
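
accel_missing_filename runs accel_perf through the NOT helper: the compress workload without -l fails ("A filename is required."), and that non-zero exit is exactly what the test wants. The bookkeeping that follows shows how the wrapper settles the status: the raw exit code 234 is above 128, so 128 is stripped to give 106, the case statement normalizes it to 1, and the helper finally succeeds because (( !es == 0 )) is true only when es is non-zero. A simplified stand-in for that pattern (not the autotest_common.sh implementation):

  #!/usr/bin/env bash
  # Negative-test wrapper: succeed only if the wrapped command fails.
  not() {
      local es=0
      "$@" || es=$?
      ((es > 128)) && es=$((es - 128))   # strip the "terminated by signal" offset
      ((es != 0))                        # return success only when the command failed
  }
  not false && echo "negative test passed"
  not true  || echo "a succeeding command makes the negative test fail"
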
00:05:18.901 17:30:40 -- common/autotest_common.sh@643 -- # es=234 00:05:18.901 17:30:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:18.901 17:30:40 -- common/autotest_common.sh@652 -- # es=106 00:05:18.901 17:30:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:18.901 17:30:40 -- common/autotest_common.sh@660 -- # es=1 00:05:18.901 17:30:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:18.901 00:05:18.901 real 0m0.349s 00:05:18.901 user 0m0.279s 00:05:18.901 sys 0m0.110s 00:05:18.901 17:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.901 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:18.901 ************************************ 00:05:18.901 END TEST accel_missing_filename 00:05:18.901 ************************************ 00:05:18.901 17:30:40 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.901 17:30:40 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:18.901 17:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.901 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:18.901 ************************************ 00:05:18.901 START TEST accel_compress_verify 00:05:18.901 ************************************ 00:05:18.902 17:30:40 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.902 17:30:40 -- common/autotest_common.sh@640 -- # local es=0 00:05:18.902 17:30:40 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.902 17:30:40 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:18.902 17:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:18.902 17:30:40 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:18.902 17:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:18.902 17:30:40 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.902 17:30:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.902 17:30:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.902 17:30:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.902 17:30:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.902 17:30:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.902 17:30:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.902 17:30:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.902 17:30:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.902 17:30:40 -- accel/accel.sh@42 -- # jq -r . 00:05:18.902 [2024-07-24 17:30:40.456146] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:18.902 [2024-07-24 17:30:40.456219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438240 ] 00:05:18.902 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.161 [2024-07-24 17:30:40.511844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.161 [2024-07-24 17:30:40.582939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.161 [2024-07-24 17:30:40.624117] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:19.161 [2024-07-24 17:30:40.684272] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:19.421 00:05:19.421 Compression does not support the verify option, aborting. 00:05:19.421 17:30:40 -- common/autotest_common.sh@643 -- # es=161 00:05:19.421 17:30:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:19.421 17:30:40 -- common/autotest_common.sh@652 -- # es=33 00:05:19.421 17:30:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:19.421 17:30:40 -- common/autotest_common.sh@660 -- # es=1 00:05:19.421 17:30:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:19.421 00:05:19.421 real 0m0.350s 00:05:19.421 user 0m0.272s 00:05:19.421 sys 0m0.117s 00:05:19.421 17:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.421 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:19.421 ************************************ 00:05:19.421 END TEST accel_compress_verify 00:05:19.421 ************************************ 00:05:19.421 17:30:40 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:19.421 17:30:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:19.421 17:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.421 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:19.421 ************************************ 00:05:19.421 START TEST accel_wrong_workload 00:05:19.421 ************************************ 00:05:19.421 17:30:40 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:19.421 17:30:40 -- common/autotest_common.sh@640 -- # local es=0 00:05:19.421 17:30:40 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:19.421 17:30:40 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:19.421 17:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:19.421 17:30:40 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:19.421 17:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:19.421 17:30:40 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:19.421 17:30:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:19.421 17:30:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.421 17:30:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.421 17:30:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.421 17:30:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.421 17:30:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.421 17:30:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.421 17:30:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.421 17:30:40 -- accel/accel.sh@42 -- # jq -r . 
00:05:19.422 Unsupported workload type: foobar 00:05:19.422 [2024-07-24 17:30:40.841594] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:19.422 accel_perf options: 00:05:19.422 [-h help message] 00:05:19.422 [-q queue depth per core] 00:05:19.422 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:19.422 [-T number of threads per core 00:05:19.422 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:19.422 [-t time in seconds] 00:05:19.422 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:19.422 [ dif_verify, , dif_generate, dif_generate_copy 00:05:19.422 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:19.422 [-l for compress/decompress workloads, name of uncompressed input file 00:05:19.422 [-S for crc32c workload, use this seed value (default 0) 00:05:19.422 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:19.422 [-f for fill workload, use this BYTE value (default 255) 00:05:19.422 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:19.422 [-y verify result if this switch is on] 00:05:19.422 [-a tasks to allocate per core (default: same value as -q)] 00:05:19.422 Can be used to spread operations across a wider range of memory. 00:05:19.422 17:30:40 -- common/autotest_common.sh@643 -- # es=1 00:05:19.422 17:30:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:19.422 17:30:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:19.422 17:30:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:19.422 00:05:19.422 real 0m0.034s 00:05:19.422 user 0m0.023s 00:05:19.422 sys 0m0.011s 00:05:19.422 17:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.422 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:19.422 ************************************ 00:05:19.422 END TEST accel_wrong_workload 00:05:19.422 ************************************ 00:05:19.422 Error: writing output failed: Broken pipe 00:05:19.422 17:30:40 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:19.422 17:30:40 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:19.422 17:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.422 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:19.422 ************************************ 00:05:19.422 START TEST accel_negative_buffers 00:05:19.422 ************************************ 00:05:19.422 17:30:40 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:19.422 17:30:40 -- common/autotest_common.sh@640 -- # local es=0 00:05:19.422 17:30:40 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:19.422 17:30:40 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:19.422 17:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:19.422 17:30:40 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:19.422 17:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:19.422 17:30:40 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:19.422 17:30:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:19.422 17:30:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.422 17:30:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.422 17:30:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.422 17:30:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.422 17:30:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.422 17:30:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.422 17:30:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.422 17:30:40 -- accel/accel.sh@42 -- # jq -r . 00:05:19.422 -x option must be non-negative. 00:05:19.422 [2024-07-24 17:30:40.903888] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:19.422 accel_perf options: 00:05:19.422 [-h help message] 00:05:19.422 [-q queue depth per core] 00:05:19.422 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:19.422 [-T number of threads per core 00:05:19.422 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:19.422 [-t time in seconds] 00:05:19.422 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:19.422 [ dif_verify, , dif_generate, dif_generate_copy 00:05:19.422 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:19.422 [-l for compress/decompress workloads, name of uncompressed input file 00:05:19.422 [-S for crc32c workload, use this seed value (default 0) 00:05:19.422 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:19.422 [-f for fill workload, use this BYTE value (default 255) 00:05:19.422 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:19.422 [-y verify result if this switch is on] 00:05:19.422 [-a tasks to allocate per core (default: same value as -q)] 00:05:19.422 Can be used to spread operations across a wider range of memory. 
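
accel_wrong_workload and accel_negative_buffers have the same shape: hand accel_perf an argument it must refuse (-w foobar, then -x -1), let spdk_app_parse_args fail and print the usage text above, and let the NOT wrapper count the non-zero exit as a pass. A minimal sketch of such a "must be rejected" check, with a bogus flag to a stock utility standing in for the accel_perf cases (not part of the SPDK scripts):

  #!/usr/bin/env bash
  assert_rejects() {
      if "$@" > /dev/null 2>&1; then
          echo "FAIL: '$*' was accepted" >&2
          return 1
      fi
      echo "ok: '$*' rejected as expected"
  }
  assert_rejects sort --no-such-flag

The "Error: writing output failed: Broken pipe" lines interleaved around these tests appear to be the usage text being flushed into an already-closed capture pipe rather than additional failures.
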
00:05:19.422 17:30:40 -- common/autotest_common.sh@643 -- # es=1 00:05:19.422 17:30:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:19.422 17:30:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:19.422 17:30:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:19.422 00:05:19.422 real 0m0.028s 00:05:19.422 user 0m0.014s 00:05:19.422 sys 0m0.014s 00:05:19.422 17:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.422 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:19.422 ************************************ 00:05:19.422 END TEST accel_negative_buffers 00:05:19.422 ************************************ 00:05:19.422 Error: writing output failed: Broken pipe 00:05:19.422 17:30:40 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:19.422 17:30:40 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:19.422 17:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.422 17:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:19.422 ************************************ 00:05:19.422 START TEST accel_crc32c 00:05:19.422 ************************************ 00:05:19.422 17:30:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:19.422 17:30:40 -- accel/accel.sh@16 -- # local accel_opc 00:05:19.422 17:30:40 -- accel/accel.sh@17 -- # local accel_module 00:05:19.422 17:30:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:19.422 17:30:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:19.422 17:30:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.422 17:30:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.422 17:30:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.422 17:30:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.422 17:30:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.422 17:30:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.422 17:30:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.422 17:30:40 -- accel/accel.sh@42 -- # jq -r . 00:05:19.422 [2024-07-24 17:30:40.972552] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:19.422 [2024-07-24 17:30:40.972618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438296 ] 00:05:19.422 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.681 [2024-07-24 17:30:41.031096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.681 [2024-07-24 17:30:41.104428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.061 17:30:42 -- accel/accel.sh@18 -- # out=' 00:05:21.061 SPDK Configuration: 00:05:21.061 Core mask: 0x1 00:05:21.061 00:05:21.061 Accel Perf Configuration: 00:05:21.061 Workload Type: crc32c 00:05:21.061 CRC-32C seed: 32 00:05:21.061 Transfer size: 4096 bytes 00:05:21.061 Vector count 1 00:05:21.061 Module: software 00:05:21.061 Queue depth: 32 00:05:21.061 Allocate depth: 32 00:05:21.061 # threads/core: 1 00:05:21.061 Run time: 1 seconds 00:05:21.061 Verify: Yes 00:05:21.061 00:05:21.061 Running for 1 seconds... 
00:05:21.061 00:05:21.061 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:21.061 ------------------------------------------------------------------------------------ 00:05:21.061 0,0 569344/s 2224 MiB/s 0 0 00:05:21.061 ==================================================================================== 00:05:21.061 Total 569344/s 2224 MiB/s 0 0' 00:05:21.061 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.061 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.061 17:30:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:21.061 17:30:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:21.061 17:30:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.061 17:30:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.061 17:30:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.061 17:30:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.061 17:30:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.061 17:30:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.061 17:30:42 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.061 17:30:42 -- accel/accel.sh@42 -- # jq -r . 00:05:21.061 [2024-07-24 17:30:42.317234] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:21.061 [2024-07-24 17:30:42.317295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438531 ] 00:05:21.061 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.061 [2024-07-24 17:30:42.370407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.061 [2024-07-24 17:30:42.438313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.061 17:30:42 -- accel/accel.sh@21 -- # val= 00:05:21.061 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.061 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.061 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.061 17:30:42 -- accel/accel.sh@21 -- # val= 00:05:21.061 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.061 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.061 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.061 17:30:42 -- accel/accel.sh@21 -- # val=0x1 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val= 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val= 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val=crc32c 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val=32 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 
17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val= 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val=software 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@23 -- # accel_module=software 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val=32 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val=32 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val=1 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val=Yes 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val= 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:21.062 17:30:42 -- accel/accel.sh@21 -- # val= 00:05:21.062 17:30:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # IFS=: 00:05:21.062 17:30:42 -- accel/accel.sh@20 -- # read -r var val 00:05:22.441 17:30:43 -- accel/accel.sh@21 -- # val= 00:05:22.441 17:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # IFS=: 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # read -r var val 00:05:22.441 17:30:43 -- accel/accel.sh@21 -- # val= 00:05:22.441 17:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # IFS=: 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # read -r var val 00:05:22.441 17:30:43 -- accel/accel.sh@21 -- # val= 00:05:22.441 17:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # IFS=: 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # read -r var val 00:05:22.441 17:30:43 -- accel/accel.sh@21 -- # val= 00:05:22.441 17:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # IFS=: 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # read -r var val 00:05:22.441 17:30:43 -- accel/accel.sh@21 -- # val= 00:05:22.441 17:30:43 -- accel/accel.sh@22 -- # case "$var" in 
00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # IFS=: 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # read -r var val 00:05:22.441 17:30:43 -- accel/accel.sh@21 -- # val= 00:05:22.441 17:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # IFS=: 00:05:22.441 17:30:43 -- accel/accel.sh@20 -- # read -r var val 00:05:22.441 17:30:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:22.441 17:30:43 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:22.441 17:30:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.441 00:05:22.441 real 0m2.690s 00:05:22.441 user 0m2.463s 00:05:22.441 sys 0m0.223s 00:05:22.441 17:30:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.441 17:30:43 -- common/autotest_common.sh@10 -- # set +x 00:05:22.441 ************************************ 00:05:22.441 END TEST accel_crc32c 00:05:22.441 ************************************ 00:05:22.441 17:30:43 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:22.441 17:30:43 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:22.441 17:30:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.441 17:30:43 -- common/autotest_common.sh@10 -- # set +x 00:05:22.441 ************************************ 00:05:22.441 START TEST accel_crc32c_C2 00:05:22.441 ************************************ 00:05:22.441 17:30:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:22.441 17:30:43 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.441 17:30:43 -- accel/accel.sh@17 -- # local accel_module 00:05:22.441 17:30:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:22.441 17:30:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:22.441 17:30:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.441 17:30:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.442 17:30:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.442 17:30:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.442 17:30:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.442 17:30:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.442 17:30:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.442 17:30:43 -- accel/accel.sh@42 -- # jq -r . 00:05:22.442 [2024-07-24 17:30:43.692245] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
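
Every accel_test call in this suite expands the same way: build_accel_config assembles the accel JSON config (empty here, so the software engine backs everything) and accel_perf from build/examples is started with that config on an anonymous file descriptor (-c /dev/fd/62) plus the workload flags. A hand-run equivalent of the crc32c_C2 case, leaving the JSON config out since this run does not need one (the default path below is simply this job's workspace and is an assumption):

  #!/usr/bin/env bash
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  args=(
      -t 1        # run the workload for 1 second
      -w crc32c   # workload type
      -y          # verify results
      -C 2        # io vector size of 2, the "_C2" in the test name
  )
  "$SPDK_DIR/build/examples/accel_perf" "${args[@]}"
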
00:05:22.442 [2024-07-24 17:30:43.692320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438786 ] 00:05:22.442 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.442 [2024-07-24 17:30:43.747105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.442 [2024-07-24 17:30:43.815819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.822 17:30:45 -- accel/accel.sh@18 -- # out=' 00:05:23.822 SPDK Configuration: 00:05:23.822 Core mask: 0x1 00:05:23.822 00:05:23.822 Accel Perf Configuration: 00:05:23.822 Workload Type: crc32c 00:05:23.822 CRC-32C seed: 0 00:05:23.822 Transfer size: 4096 bytes 00:05:23.822 Vector count 2 00:05:23.822 Module: software 00:05:23.822 Queue depth: 32 00:05:23.822 Allocate depth: 32 00:05:23.822 # threads/core: 1 00:05:23.822 Run time: 1 seconds 00:05:23.822 Verify: Yes 00:05:23.822 00:05:23.822 Running for 1 seconds... 00:05:23.822 00:05:23.822 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:23.822 ------------------------------------------------------------------------------------ 00:05:23.822 0,0 450208/s 3517 MiB/s 0 0 00:05:23.822 ==================================================================================== 00:05:23.822 Total 450208/s 1758 MiB/s 0 0' 00:05:23.822 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.822 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:23.823 17:30:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:23.823 17:30:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.823 17:30:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.823 17:30:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.823 17:30:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.823 17:30:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.823 17:30:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.823 17:30:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.823 17:30:45 -- accel/accel.sh@42 -- # jq -r . 00:05:23.823 [2024-07-24 17:30:45.027454] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:23.823 [2024-07-24 17:30:45.027520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439018 ] 00:05:23.823 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.823 [2024-07-24 17:30:45.079969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.823 [2024-07-24 17:30:45.147817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val= 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val= 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val=0x1 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val= 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val= 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val=crc32c 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val=0 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val= 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val=software 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@23 -- # accel_module=software 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val=32 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val=32 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- 
accel/accel.sh@21 -- # val=1 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val=Yes 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val= 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:23.823 17:30:45 -- accel/accel.sh@21 -- # val= 00:05:23.823 17:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # IFS=: 00:05:23.823 17:30:45 -- accel/accel.sh@20 -- # read -r var val 00:05:24.762 17:30:46 -- accel/accel.sh@21 -- # val= 00:05:24.762 17:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # IFS=: 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # read -r var val 00:05:24.762 17:30:46 -- accel/accel.sh@21 -- # val= 00:05:24.762 17:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # IFS=: 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # read -r var val 00:05:24.762 17:30:46 -- accel/accel.sh@21 -- # val= 00:05:24.762 17:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # IFS=: 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # read -r var val 00:05:24.762 17:30:46 -- accel/accel.sh@21 -- # val= 00:05:24.762 17:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # IFS=: 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # read -r var val 00:05:24.762 17:30:46 -- accel/accel.sh@21 -- # val= 00:05:24.762 17:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # IFS=: 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # read -r var val 00:05:24.762 17:30:46 -- accel/accel.sh@21 -- # val= 00:05:24.762 17:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # IFS=: 00:05:24.762 17:30:46 -- accel/accel.sh@20 -- # read -r var val 00:05:24.762 17:30:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:24.762 17:30:46 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:24.762 17:30:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.762 00:05:24.762 real 0m2.680s 00:05:24.762 user 0m2.460s 00:05:24.762 sys 0m0.217s 00:05:24.762 17:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.762 17:30:46 -- common/autotest_common.sh@10 -- # set +x 00:05:24.762 ************************************ 00:05:24.762 END TEST accel_crc32c_C2 00:05:24.762 ************************************ 00:05:25.022 17:30:46 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:25.022 17:30:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:25.022 17:30:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.022 17:30:46 -- common/autotest_common.sh@10 -- # set +x 00:05:25.022 ************************************ 00:05:25.022 START TEST accel_copy 
00:05:25.022 ************************************ 00:05:25.022 17:30:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:25.022 17:30:46 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.022 17:30:46 -- accel/accel.sh@17 -- # local accel_module 00:05:25.022 17:30:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:25.022 17:30:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:25.022 17:30:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.022 17:30:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.022 17:30:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.022 17:30:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.022 17:30:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.022 17:30:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.022 17:30:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.022 17:30:46 -- accel/accel.sh@42 -- # jq -r . 00:05:25.022 [2024-07-24 17:30:46.408224] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:25.022 [2024-07-24 17:30:46.408284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439267 ] 00:05:25.022 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.022 [2024-07-24 17:30:46.464923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.022 [2024-07-24 17:30:46.536348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.403 17:30:47 -- accel/accel.sh@18 -- # out=' 00:05:26.403 SPDK Configuration: 00:05:26.403 Core mask: 0x1 00:05:26.403 00:05:26.403 Accel Perf Configuration: 00:05:26.403 Workload Type: copy 00:05:26.403 Transfer size: 4096 bytes 00:05:26.403 Vector count 1 00:05:26.403 Module: software 00:05:26.403 Queue depth: 32 00:05:26.403 Allocate depth: 32 00:05:26.403 # threads/core: 1 00:05:26.403 Run time: 1 seconds 00:05:26.403 Verify: Yes 00:05:26.403 00:05:26.403 Running for 1 seconds... 00:05:26.403 00:05:26.403 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:26.403 ------------------------------------------------------------------------------------ 00:05:26.403 0,0 428384/s 1673 MiB/s 0 0 00:05:26.403 ==================================================================================== 00:05:26.403 Total 428384/s 1673 MiB/s 0 0' 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:26.403 17:30:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:26.403 17:30:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.403 17:30:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.403 17:30:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.403 17:30:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.403 17:30:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.403 17:30:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.403 17:30:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.403 17:30:47 -- accel/accel.sh@42 -- # jq -r . 00:05:26.403 [2024-07-24 17:30:47.747322] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
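
The bandwidth column in these summaries is simply transfers per second times the 4096-byte transfer size, reported in MiB/s. Recomputing the accel_copy result above as a sanity check (numbers copied from the log; the snippet is not part of the test suite):

  #!/usr/bin/env bash
  transfers_per_sec=428384   # "Total" row of the accel_copy run
  xfer_bytes=4096            # "Transfer size: 4096 bytes"
  mib_per_sec=$(( transfers_per_sec * xfer_bytes / 1024 / 1024 ))
  echo "${mib_per_sec} MiB/s"   # -> 1673, matching the report
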
00:05:26.403 [2024-07-24 17:30:47.747372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439512 ] 00:05:26.403 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.403 [2024-07-24 17:30:47.798554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.403 [2024-07-24 17:30:47.865983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val= 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val= 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val=0x1 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val= 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val= 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val=copy 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val= 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val=software 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@23 -- # accel_module=software 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val=32 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val=32 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val=1 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val=Yes 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val= 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:26.403 17:30:47 -- accel/accel.sh@21 -- # val= 00:05:26.403 17:30:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # IFS=: 00:05:26.403 17:30:47 -- accel/accel.sh@20 -- # read -r var val 00:05:27.822 17:30:49 -- accel/accel.sh@21 -- # val= 00:05:27.822 17:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # IFS=: 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # read -r var val 00:05:27.822 17:30:49 -- accel/accel.sh@21 -- # val= 00:05:27.822 17:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # IFS=: 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # read -r var val 00:05:27.822 17:30:49 -- accel/accel.sh@21 -- # val= 00:05:27.822 17:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # IFS=: 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # read -r var val 00:05:27.822 17:30:49 -- accel/accel.sh@21 -- # val= 00:05:27.822 17:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # IFS=: 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # read -r var val 00:05:27.822 17:30:49 -- accel/accel.sh@21 -- # val= 00:05:27.822 17:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # IFS=: 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # read -r var val 00:05:27.822 17:30:49 -- accel/accel.sh@21 -- # val= 00:05:27.822 17:30:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # IFS=: 00:05:27.822 17:30:49 -- accel/accel.sh@20 -- # read -r var val 00:05:27.822 17:30:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:27.822 17:30:49 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:27.822 17:30:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.822 00:05:27.822 real 0m2.679s 00:05:27.822 user 0m2.464s 00:05:27.822 sys 0m0.211s 00:05:27.822 17:30:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.822 17:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:27.822 ************************************ 00:05:27.822 END TEST accel_copy 00:05:27.822 ************************************ 00:05:27.822 17:30:49 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.822 17:30:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:27.822 17:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.822 17:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:27.822 ************************************ 00:05:27.822 START TEST accel_fill 00:05:27.822 ************************************ 00:05:27.823 17:30:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.823 17:30:49 -- accel/accel.sh@16 -- # local accel_opc 
00:05:27.823 17:30:49 -- accel/accel.sh@17 -- # local accel_module 00:05:27.823 17:30:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.823 17:30:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.823 17:30:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.823 17:30:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.823 17:30:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.823 17:30:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.823 17:30:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.823 17:30:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.823 17:30:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.823 17:30:49 -- accel/accel.sh@42 -- # jq -r . 00:05:27.823 [2024-07-24 17:30:49.117177] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:27.823 [2024-07-24 17:30:49.117253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439759 ] 00:05:27.823 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.823 [2024-07-24 17:30:49.171810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.823 [2024-07-24 17:30:49.240305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.203 17:30:50 -- accel/accel.sh@18 -- # out=' 00:05:29.203 SPDK Configuration: 00:05:29.203 Core mask: 0x1 00:05:29.203 00:05:29.203 Accel Perf Configuration: 00:05:29.203 Workload Type: fill 00:05:29.203 Fill pattern: 0x80 00:05:29.203 Transfer size: 4096 bytes 00:05:29.203 Vector count 1 00:05:29.203 Module: software 00:05:29.203 Queue depth: 64 00:05:29.203 Allocate depth: 64 00:05:29.203 # threads/core: 1 00:05:29.203 Run time: 1 seconds 00:05:29.203 Verify: Yes 00:05:29.203 00:05:29.203 Running for 1 seconds... 00:05:29.203 00:05:29.203 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:29.203 ------------------------------------------------------------------------------------ 00:05:29.203 0,0 654464/s 2556 MiB/s 0 0 00:05:29.203 ==================================================================================== 00:05:29.203 Total 654464/s 2556 MiB/s 0 0' 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.203 17:30:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.203 17:30:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.203 17:30:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.203 17:30:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.203 17:30:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.203 17:30:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.203 17:30:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.203 17:30:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.203 17:30:50 -- accel/accel.sh@42 -- # jq -r . 00:05:29.203 [2024-07-24 17:30:50.451164] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
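A note on the command traced just above: the accel_perf example is launched with its accel config passed over /dev/fd/62, and the "Accel Perf Configuration" block echoes each flag back (-w fill -> Workload Type: fill, -f 128 -> Fill pattern: 0x80, -q 64 -> Queue depth: 64, -a 64 -> Allocate depth: 64, -t 1 -> Run time: 1 seconds, -y -> Verify: Yes). A minimal standalone sketch of the same fill run outside the test harness, without the config plumbing (the binary path is the one this CI workspace uses):

  # Hypothetical standalone re-run of the fill workload traced above; the flag
  # meanings are inferred from the "Accel Perf Configuration" echo in this log.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w fill -f 128 -q 64 -a 64 -y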
00:05:29.203 [2024-07-24 17:30:50.451225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439996 ] 00:05:29.203 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.203 [2024-07-24 17:30:50.503861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.203 [2024-07-24 17:30:50.571701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val= 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val= 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val=0x1 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val= 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val= 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val=fill 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val=0x80 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val= 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val=software 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@23 -- # accel_module=software 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val=64 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val=64 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- 
accel/accel.sh@21 -- # val=1 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val=Yes 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val= 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:29.203 17:30:50 -- accel/accel.sh@21 -- # val= 00:05:29.203 17:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # IFS=: 00:05:29.203 17:30:50 -- accel/accel.sh@20 -- # read -r var val 00:05:30.583 17:30:51 -- accel/accel.sh@21 -- # val= 00:05:30.584 17:30:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # IFS=: 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # read -r var val 00:05:30.584 17:30:51 -- accel/accel.sh@21 -- # val= 00:05:30.584 17:30:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # IFS=: 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # read -r var val 00:05:30.584 17:30:51 -- accel/accel.sh@21 -- # val= 00:05:30.584 17:30:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # IFS=: 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # read -r var val 00:05:30.584 17:30:51 -- accel/accel.sh@21 -- # val= 00:05:30.584 17:30:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # IFS=: 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # read -r var val 00:05:30.584 17:30:51 -- accel/accel.sh@21 -- # val= 00:05:30.584 17:30:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # IFS=: 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # read -r var val 00:05:30.584 17:30:51 -- accel/accel.sh@21 -- # val= 00:05:30.584 17:30:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # IFS=: 00:05:30.584 17:30:51 -- accel/accel.sh@20 -- # read -r var val 00:05:30.584 17:30:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:30.584 17:30:51 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:30.584 17:30:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.584 00:05:30.584 real 0m2.680s 00:05:30.584 user 0m2.467s 00:05:30.584 sys 0m0.208s 00:05:30.584 17:30:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.584 17:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 ************************************ 00:05:30.584 END TEST accel_fill 00:05:30.584 ************************************ 00:05:30.584 17:30:51 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:30.584 17:30:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:30.584 17:30:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.584 17:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 ************************************ 00:05:30.584 START TEST 
accel_copy_crc32c 00:05:30.584 ************************************ 00:05:30.584 17:30:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:30.584 17:30:51 -- accel/accel.sh@16 -- # local accel_opc 00:05:30.584 17:30:51 -- accel/accel.sh@17 -- # local accel_module 00:05:30.584 17:30:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:30.584 17:30:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.584 17:30:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:30.584 17:30:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.584 17:30:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.584 17:30:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.584 17:30:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.584 17:30:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.584 17:30:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.584 17:30:51 -- accel/accel.sh@42 -- # jq -r . 00:05:30.584 [2024-07-24 17:30:51.833901] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:30.584 [2024-07-24 17:30:51.833974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440252 ] 00:05:30.584 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.584 [2024-07-24 17:30:51.889313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.584 [2024-07-24 17:30:51.958442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.964 17:30:53 -- accel/accel.sh@18 -- # out=' 00:05:31.964 SPDK Configuration: 00:05:31.964 Core mask: 0x1 00:05:31.964 00:05:31.964 Accel Perf Configuration: 00:05:31.964 Workload Type: copy_crc32c 00:05:31.964 CRC-32C seed: 0 00:05:31.964 Vector size: 4096 bytes 00:05:31.964 Transfer size: 4096 bytes 00:05:31.964 Vector count 1 00:05:31.964 Module: software 00:05:31.964 Queue depth: 32 00:05:31.964 Allocate depth: 32 00:05:31.964 # threads/core: 1 00:05:31.964 Run time: 1 seconds 00:05:31.964 Verify: Yes 00:05:31.964 00:05:31.964 Running for 1 seconds... 00:05:31.964 00:05:31.964 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:31.964 ------------------------------------------------------------------------------------ 00:05:31.964 0,0 327904/s 1280 MiB/s 0 0 00:05:31.964 ==================================================================================== 00:05:31.964 Total 327904/s 1280 MiB/s 0 0' 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:31.964 17:30:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:31.964 17:30:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.964 17:30:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.964 17:30:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.964 17:30:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.964 17:30:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.964 17:30:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.964 17:30:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.964 17:30:53 -- accel/accel.sh@42 -- # jq -r . 
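The bandwidth column in these result tables is simply the transfer rate times the transfer size; for the copy_crc32c run above, 327904 transfers/s at 4096 bytes each works out to the reported 1280 MiB/s. A one-line cross-check of that arithmetic:

  # Rough cross-check of the copy_crc32c row above: ops/s * bytes per op -> MiB/s
  ops_per_sec=327904; xfer_bytes=4096
  echo "$(( ops_per_sec * xfer_bytes / 1024 / 1024 )) MiB/s"   # prints "1280 MiB/s"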
00:05:31.964 [2024-07-24 17:30:53.180882] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:31.964 [2024-07-24 17:30:53.180938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440486 ] 00:05:31.964 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.964 [2024-07-24 17:30:53.234708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.964 [2024-07-24 17:30:53.304230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val= 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val= 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val=0x1 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val= 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val= 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val=0 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val= 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val=software 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@23 -- # accel_module=software 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val=32 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 
00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val=32 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val=1 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val=Yes 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val= 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:31.964 17:30:53 -- accel/accel.sh@21 -- # val= 00:05:31.964 17:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # IFS=: 00:05:31.964 17:30:53 -- accel/accel.sh@20 -- # read -r var val 00:05:33.340 17:30:54 -- accel/accel.sh@21 -- # val= 00:05:33.340 17:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # IFS=: 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # read -r var val 00:05:33.340 17:30:54 -- accel/accel.sh@21 -- # val= 00:05:33.340 17:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # IFS=: 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # read -r var val 00:05:33.340 17:30:54 -- accel/accel.sh@21 -- # val= 00:05:33.340 17:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # IFS=: 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # read -r var val 00:05:33.340 17:30:54 -- accel/accel.sh@21 -- # val= 00:05:33.340 17:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # IFS=: 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # read -r var val 00:05:33.340 17:30:54 -- accel/accel.sh@21 -- # val= 00:05:33.340 17:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # IFS=: 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # read -r var val 00:05:33.340 17:30:54 -- accel/accel.sh@21 -- # val= 00:05:33.340 17:30:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # IFS=: 00:05:33.340 17:30:54 -- accel/accel.sh@20 -- # read -r var val 00:05:33.340 17:30:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:33.340 17:30:54 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:33.340 17:30:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.340 00:05:33.340 real 0m2.699s 00:05:33.340 user 0m2.490s 00:05:33.340 sys 0m0.218s 00:05:33.340 17:30:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.340 17:30:54 -- common/autotest_common.sh@10 -- # set +x 00:05:33.340 ************************************ 00:05:33.340 END TEST accel_copy_crc32c 00:05:33.340 ************************************ 00:05:33.340 
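Each of these test blocks is wrapped by the run_test helper from common/autotest_common.sh; judging from the banners and the real/user/sys lines in this log, it prints a START TEST banner, runs the named test command under `time`, and closes with an END TEST banner. A simplified sketch of that pattern (not the actual autotest_common.sh implementation, which also manages xtrace and the argument checks visible as the '[' N -le 1 ']' lines):

  # Simplified approximation of the banner/timing wrapper seen in this log.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"     # produces the real/user/sys lines
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  # e.g.: run_test_sketch accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2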
17:30:54 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:33.340 17:30:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:33.340 17:30:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.340 17:30:54 -- common/autotest_common.sh@10 -- # set +x 00:05:33.340 ************************************ 00:05:33.340 START TEST accel_copy_crc32c_C2 00:05:33.340 ************************************ 00:05:33.340 17:30:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:33.340 17:30:54 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.340 17:30:54 -- accel/accel.sh@17 -- # local accel_module 00:05:33.341 17:30:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:33.341 17:30:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:33.341 17:30:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.341 17:30:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.341 17:30:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.341 17:30:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.341 17:30:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.341 17:30:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.341 17:30:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.341 17:30:54 -- accel/accel.sh@42 -- # jq -r . 00:05:33.341 [2024-07-24 17:30:54.557869] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:33.341 [2024-07-24 17:30:54.557921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440733 ] 00:05:33.341 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.341 [2024-07-24 17:30:54.605479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.341 [2024-07-24 17:30:54.674833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.277 17:30:55 -- accel/accel.sh@18 -- # out=' 00:05:34.277 SPDK Configuration: 00:05:34.277 Core mask: 0x1 00:05:34.277 00:05:34.277 Accel Perf Configuration: 00:05:34.277 Workload Type: copy_crc32c 00:05:34.277 CRC-32C seed: 0 00:05:34.277 Vector size: 4096 bytes 00:05:34.277 Transfer size: 8192 bytes 00:05:34.277 Vector count 2 00:05:34.277 Module: software 00:05:34.277 Queue depth: 32 00:05:34.277 Allocate depth: 32 00:05:34.277 # threads/core: 1 00:05:34.277 Run time: 1 seconds 00:05:34.277 Verify: Yes 00:05:34.277 00:05:34.277 Running for 1 seconds... 
00:05:34.277 00:05:34.277 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:34.277 ------------------------------------------------------------------------------------ 00:05:34.277 0,0 236864/s 1850 MiB/s 0 0 00:05:34.277 ==================================================================================== 00:05:34.277 Total 236864/s 925 MiB/s 0 0' 00:05:34.277 17:30:55 -- accel/accel.sh@20 -- # IFS=: 00:05:34.277 17:30:55 -- accel/accel.sh@20 -- # read -r var val 00:05:34.277 17:30:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:34.277 17:30:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:34.536 17:30:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.536 17:30:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.536 17:30:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.536 17:30:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.536 17:30:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.536 17:30:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.536 17:30:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.536 17:30:55 -- accel/accel.sh@42 -- # jq -r . 00:05:34.536 [2024-07-24 17:30:55.899204] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:34.536 [2024-07-24 17:30:55.899284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440973 ] 00:05:34.536 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.536 [2024-07-24 17:30:55.953810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.536 [2024-07-24 17:30:56.022700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val= 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val= 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val=0x1 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val= 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val= 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val=0 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 
00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val= 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val=software 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@23 -- # accel_module=software 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val=32 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val=32 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val=1 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val=Yes 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val= 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:34.536 17:30:56 -- accel/accel.sh@21 -- # val= 00:05:34.536 17:30:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # IFS=: 00:05:34.536 17:30:56 -- accel/accel.sh@20 -- # read -r var val 00:05:35.913 17:30:57 -- accel/accel.sh@21 -- # val= 00:05:35.913 17:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # IFS=: 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # read -r var val 00:05:35.913 17:30:57 -- accel/accel.sh@21 -- # val= 00:05:35.913 17:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # IFS=: 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # read -r var val 00:05:35.913 17:30:57 -- accel/accel.sh@21 -- # val= 00:05:35.913 17:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # IFS=: 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # read -r var val 00:05:35.913 17:30:57 -- accel/accel.sh@21 -- # val= 00:05:35.913 17:30:57 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # IFS=: 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # read -r var val 00:05:35.913 17:30:57 -- accel/accel.sh@21 -- # val= 00:05:35.913 17:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # IFS=: 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # read -r var val 00:05:35.913 17:30:57 -- accel/accel.sh@21 -- # val= 00:05:35.913 17:30:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # IFS=: 00:05:35.913 17:30:57 -- accel/accel.sh@20 -- # read -r var val 00:05:35.913 17:30:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:35.913 17:30:57 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:35.913 17:30:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.913 00:05:35.913 real 0m2.684s 00:05:35.913 user 0m2.480s 00:05:35.913 sys 0m0.213s 00:05:35.913 17:30:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.913 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:05:35.913 ************************************ 00:05:35.913 END TEST accel_copy_crc32c_C2 00:05:35.913 ************************************ 00:05:35.913 17:30:57 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:35.913 17:30:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:35.913 17:30:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.913 17:30:57 -- common/autotest_common.sh@10 -- # set +x 00:05:35.913 ************************************ 00:05:35.913 START TEST accel_dualcast 00:05:35.913 ************************************ 00:05:35.913 17:30:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:35.913 17:30:57 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.913 17:30:57 -- accel/accel.sh@17 -- # local accel_module 00:05:35.913 17:30:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:35.913 17:30:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.913 17:30:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:35.913 17:30:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.913 17:30:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.913 17:30:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.913 17:30:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.913 17:30:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.913 17:30:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.913 17:30:57 -- accel/accel.sh@42 -- # jq -r . 00:05:35.913 [2024-07-24 17:30:57.291822] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
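The accel_perf command lines in this trace all pass "-c /dev/fd/62" and are preceded by a "jq -r ." step, i.e. the accel configuration is built as JSON in the shell and handed to the binary over a file descriptor rather than a file on disk; in these runs the config appears to be effectively empty, since every "[[ 0 -gt 0 ]]" guard in build_accel_config evaluates false. A hedged illustration of the same technique using process substitution (the JSON content below is a placeholder, not what accel.sh actually generates):

  # Illustration only: feed a JSON accel config to accel_perf via a file
  # descriptor, mirroring the "-c /dev/fd/62 ... jq -r ." pattern in the trace.
  accel_json='{}'
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -c <(echo "$accel_json" | jq -r .) -t 1 -w dualcast -y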
00:05:35.913 [2024-07-24 17:30:57.291898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441223 ] 00:05:35.913 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.913 [2024-07-24 17:30:57.346257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.913 [2024-07-24 17:30:57.416103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.292 17:30:58 -- accel/accel.sh@18 -- # out=' 00:05:37.292 SPDK Configuration: 00:05:37.292 Core mask: 0x1 00:05:37.292 00:05:37.292 Accel Perf Configuration: 00:05:37.293 Workload Type: dualcast 00:05:37.293 Transfer size: 4096 bytes 00:05:37.293 Vector count 1 00:05:37.293 Module: software 00:05:37.293 Queue depth: 32 00:05:37.293 Allocate depth: 32 00:05:37.293 # threads/core: 1 00:05:37.293 Run time: 1 seconds 00:05:37.293 Verify: Yes 00:05:37.293 00:05:37.293 Running for 1 seconds... 00:05:37.293 00:05:37.293 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:37.293 ------------------------------------------------------------------------------------ 00:05:37.293 0,0 498272/s 1946 MiB/s 0 0 00:05:37.293 ==================================================================================== 00:05:37.293 Total 498272/s 1946 MiB/s 0 0' 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:37.293 17:30:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:37.293 17:30:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.293 17:30:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.293 17:30:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.293 17:30:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.293 17:30:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.293 17:30:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:37.293 17:30:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.293 17:30:58 -- accel/accel.sh@42 -- # jq -r . 00:05:37.293 [2024-07-24 17:30:58.638568] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
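Every accel_perf start in this log is followed by "EAL: No free 2048 kB hugepages reported on node 1": DPDK's EAL finds no free 2 MB hugepages on NUMA node 1, and the runs evidently proceed on memory from the other node. If that notice ever turned into a failure, the per-node hugepage pools could be checked directly through sysfs, for example:

  # Inspect 2 MB hugepage availability per NUMA node via standard sysfs paths.
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/{nr,free}_hugepages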
00:05:37.293 [2024-07-24 17:30:58.638624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441457 ] 00:05:37.293 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.293 [2024-07-24 17:30:58.691759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.293 [2024-07-24 17:30:58.759405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val= 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val= 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val=0x1 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val= 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val= 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val=dualcast 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val= 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val=software 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@23 -- # accel_module=software 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val=32 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val=32 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val=1 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val=Yes 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val= 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:37.293 17:30:58 -- accel/accel.sh@21 -- # val= 00:05:37.293 17:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # IFS=: 00:05:37.293 17:30:58 -- accel/accel.sh@20 -- # read -r var val 00:05:38.670 17:30:59 -- accel/accel.sh@21 -- # val= 00:05:38.670 17:30:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # IFS=: 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # read -r var val 00:05:38.670 17:30:59 -- accel/accel.sh@21 -- # val= 00:05:38.670 17:30:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # IFS=: 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # read -r var val 00:05:38.670 17:30:59 -- accel/accel.sh@21 -- # val= 00:05:38.670 17:30:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # IFS=: 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # read -r var val 00:05:38.670 17:30:59 -- accel/accel.sh@21 -- # val= 00:05:38.670 17:30:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # IFS=: 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # read -r var val 00:05:38.670 17:30:59 -- accel/accel.sh@21 -- # val= 00:05:38.670 17:30:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # IFS=: 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # read -r var val 00:05:38.670 17:30:59 -- accel/accel.sh@21 -- # val= 00:05:38.670 17:30:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.670 17:30:59 -- accel/accel.sh@20 -- # IFS=: 00:05:38.671 17:30:59 -- accel/accel.sh@20 -- # read -r var val 00:05:38.671 17:30:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:38.671 17:30:59 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:38.671 17:30:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.671 00:05:38.671 real 0m2.697s 00:05:38.671 user 0m2.481s 00:05:38.671 sys 0m0.222s 00:05:38.671 17:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.671 17:30:59 -- common/autotest_common.sh@10 -- # set +x 00:05:38.671 ************************************ 00:05:38.671 END TEST accel_dualcast 00:05:38.671 ************************************ 00:05:38.671 17:30:59 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:38.671 17:30:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:38.671 17:30:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.671 17:30:59 -- common/autotest_common.sh@10 -- # set +x 00:05:38.671 ************************************ 00:05:38.671 START TEST accel_compare 00:05:38.671 ************************************ 00:05:38.671 17:30:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:05:38.671 17:30:59 -- accel/accel.sh@16 -- # local accel_opc 00:05:38.671 17:30:59 -- 
accel/accel.sh@17 -- # local accel_module 00:05:38.671 17:30:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:38.671 17:30:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:38.671 17:30:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.671 17:30:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.671 17:30:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.671 17:30:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.671 17:31:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.671 17:31:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.671 17:31:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.671 17:31:00 -- accel/accel.sh@42 -- # jq -r . 00:05:38.671 [2024-07-24 17:31:00.022900] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:38.671 [2024-07-24 17:31:00.022956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441710 ] 00:05:38.671 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.671 [2024-07-24 17:31:00.077458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.671 [2024-07-24 17:31:00.146626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.048 17:31:01 -- accel/accel.sh@18 -- # out=' 00:05:40.048 SPDK Configuration: 00:05:40.048 Core mask: 0x1 00:05:40.048 00:05:40.048 Accel Perf Configuration: 00:05:40.048 Workload Type: compare 00:05:40.048 Transfer size: 4096 bytes 00:05:40.048 Vector count 1 00:05:40.048 Module: software 00:05:40.048 Queue depth: 32 00:05:40.048 Allocate depth: 32 00:05:40.048 # threads/core: 1 00:05:40.048 Run time: 1 seconds 00:05:40.048 Verify: Yes 00:05:40.048 00:05:40.048 Running for 1 seconds... 00:05:40.048 00:05:40.048 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:40.048 ------------------------------------------------------------------------------------ 00:05:40.048 0,0 613760/s 2397 MiB/s 0 0 00:05:40.048 ==================================================================================== 00:05:40.048 Total 613760/s 2397 MiB/s 0 0' 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:40.048 17:31:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:40.048 17:31:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.048 17:31:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.048 17:31:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.048 17:31:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.048 17:31:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.048 17:31:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.048 17:31:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.048 17:31:01 -- accel/accel.sh@42 -- # jq -r . 00:05:40.048 [2024-07-24 17:31:01.367475] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
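At this point the log holds single-core, software-module numbers for copy, fill, copy_crc32c (one and two buffers), dualcast and compare, each reported on a "Total" row. A small sketch for pulling those rows into a summary, assuming the console output has been saved to a file with one log entry per line (build.log is an illustrative name):

  # Summarize throughput per workload from the "Workload Type:" and "Total" rows.
  awk '/Workload Type:/ {wl=$NF} /Total [0-9]+\/s/ {print wl, $3, $4, $5}' build.log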
00:05:40.048 [2024-07-24 17:31:01.367533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441946 ] 00:05:40.048 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.048 [2024-07-24 17:31:01.420803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.048 [2024-07-24 17:31:01.492255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val= 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val= 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val=0x1 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val= 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val= 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val=compare 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val= 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val=software 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@23 -- # accel_module=software 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val=32 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val=32 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val=1 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val=Yes 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val= 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:40.048 17:31:01 -- accel/accel.sh@21 -- # val= 00:05:40.048 17:31:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # IFS=: 00:05:40.048 17:31:01 -- accel/accel.sh@20 -- # read -r var val 00:05:41.425 17:31:02 -- accel/accel.sh@21 -- # val= 00:05:41.425 17:31:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # IFS=: 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # read -r var val 00:05:41.425 17:31:02 -- accel/accel.sh@21 -- # val= 00:05:41.425 17:31:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # IFS=: 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # read -r var val 00:05:41.425 17:31:02 -- accel/accel.sh@21 -- # val= 00:05:41.425 17:31:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # IFS=: 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # read -r var val 00:05:41.425 17:31:02 -- accel/accel.sh@21 -- # val= 00:05:41.425 17:31:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # IFS=: 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # read -r var val 00:05:41.425 17:31:02 -- accel/accel.sh@21 -- # val= 00:05:41.425 17:31:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # IFS=: 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # read -r var val 00:05:41.425 17:31:02 -- accel/accel.sh@21 -- # val= 00:05:41.425 17:31:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # IFS=: 00:05:41.425 17:31:02 -- accel/accel.sh@20 -- # read -r var val 00:05:41.425 17:31:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:41.425 17:31:02 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:41.425 17:31:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.425 00:05:41.425 real 0m2.697s 00:05:41.425 user 0m2.483s 00:05:41.425 sys 0m0.221s 00:05:41.425 17:31:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.425 17:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.425 ************************************ 00:05:41.425 END TEST accel_compare 00:05:41.425 ************************************ 00:05:41.426 17:31:02 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:41.426 17:31:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:41.426 17:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.426 17:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.426 ************************************ 00:05:41.426 START TEST accel_xor 00:05:41.426 ************************************ 00:05:41.426 17:31:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:05:41.426 17:31:02 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.426 17:31:02 -- accel/accel.sh@17 
-- # local accel_module 00:05:41.426 17:31:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:41.426 17:31:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.426 17:31:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:41.426 17:31:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.426 17:31:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.426 17:31:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.426 17:31:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.426 17:31:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.426 17:31:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.426 17:31:02 -- accel/accel.sh@42 -- # jq -r . 00:05:41.426 [2024-07-24 17:31:02.759233] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:41.426 [2024-07-24 17:31:02.759310] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442219 ] 00:05:41.426 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.426 [2024-07-24 17:31:02.813453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.426 [2024-07-24 17:31:02.882216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.802 17:31:04 -- accel/accel.sh@18 -- # out=' 00:05:42.802 SPDK Configuration: 00:05:42.802 Core mask: 0x1 00:05:42.802 00:05:42.802 Accel Perf Configuration: 00:05:42.802 Workload Type: xor 00:05:42.802 Source buffers: 2 00:05:42.802 Transfer size: 4096 bytes 00:05:42.802 Vector count 1 00:05:42.802 Module: software 00:05:42.802 Queue depth: 32 00:05:42.802 Allocate depth: 32 00:05:42.802 # threads/core: 1 00:05:42.802 Run time: 1 seconds 00:05:42.802 Verify: Yes 00:05:42.802 00:05:42.802 Running for 1 seconds... 00:05:42.802 00:05:42.802 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:42.802 ------------------------------------------------------------------------------------ 00:05:42.802 0,0 481472/s 1880 MiB/s 0 0 00:05:42.802 ==================================================================================== 00:05:42.802 Total 481472/s 1880 MiB/s 0 0' 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:42.802 17:31:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:42.802 17:31:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.802 17:31:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.802 17:31:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.802 17:31:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.802 17:31:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.802 17:31:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.802 17:31:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.802 17:31:04 -- accel/accel.sh@42 -- # jq -r . 00:05:42.802 [2024-07-24 17:31:04.104553] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
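The dense runs of "val=", "IFS=:", and "read -r var val" entries in this trace are accel.sh replaying the first accel_perf run: its output is captured into $out (the accel.sh@18 "out=" lines), then split line by line on ":" to recover settings such as the active opcode and module, which are later checked by the "[[ -n software ]]" / "[[ -n xor ]]" assertions. A rough reconstruction of that parsing pattern (inferred from the trace, not copied from accel.sh):

  # Reconstruction of the key/value parsing implied by the trace above:
  # capture the accel_perf output, then split each "key: value" line on ':'.
  out=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y)
  while IFS=: read -r var val; do
      case "$var" in
          *"Workload Type"*) accel_opc=${val# } ;;      # e.g. "xor"; strip the leading space
          *"Module"*)        accel_module=${val# } ;;   # e.g. "software"
      esac
  done <<< "$out"
  echo "opcode=$accel_opc module=$accel_module"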
00:05:42.802 [2024-07-24 17:31:04.104611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442451 ] 00:05:42.802 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.802 [2024-07-24 17:31:04.157699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.802 [2024-07-24 17:31:04.225688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val= 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val= 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val=0x1 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val= 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val= 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val=xor 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val=2 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val= 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val=software 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val=32 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val=32 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- 
accel/accel.sh@21 -- # val=1 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val=Yes 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val= 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:42.802 17:31:04 -- accel/accel.sh@21 -- # val= 00:05:42.802 17:31:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # IFS=: 00:05:42.802 17:31:04 -- accel/accel.sh@20 -- # read -r var val 00:05:44.179 17:31:05 -- accel/accel.sh@21 -- # val= 00:05:44.179 17:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # IFS=: 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # read -r var val 00:05:44.179 17:31:05 -- accel/accel.sh@21 -- # val= 00:05:44.179 17:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # IFS=: 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # read -r var val 00:05:44.179 17:31:05 -- accel/accel.sh@21 -- # val= 00:05:44.179 17:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # IFS=: 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # read -r var val 00:05:44.179 17:31:05 -- accel/accel.sh@21 -- # val= 00:05:44.179 17:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # IFS=: 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # read -r var val 00:05:44.179 17:31:05 -- accel/accel.sh@21 -- # val= 00:05:44.179 17:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # IFS=: 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # read -r var val 00:05:44.179 17:31:05 -- accel/accel.sh@21 -- # val= 00:05:44.179 17:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # IFS=: 00:05:44.179 17:31:05 -- accel/accel.sh@20 -- # read -r var val 00:05:44.179 17:31:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:44.180 17:31:05 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:44.180 17:31:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.180 00:05:44.180 real 0m2.696s 00:05:44.180 user 0m2.487s 00:05:44.180 sys 0m0.218s 00:05:44.180 17:31:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.180 17:31:05 -- common/autotest_common.sh@10 -- # set +x 00:05:44.180 ************************************ 00:05:44.180 END TEST accel_xor 00:05:44.180 ************************************ 00:05:44.180 17:31:05 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:44.180 17:31:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:44.180 17:31:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.180 17:31:05 -- common/autotest_common.sh@10 -- # set +x 00:05:44.180 ************************************ 00:05:44.180 START TEST accel_xor 
00:05:44.180 ************************************ 00:05:44.180 17:31:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:05:44.180 17:31:05 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.180 17:31:05 -- accel/accel.sh@17 -- # local accel_module 00:05:44.180 17:31:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:44.180 17:31:05 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.180 17:31:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:44.180 17:31:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.180 17:31:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.180 17:31:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.180 17:31:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.180 17:31:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.180 17:31:05 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.180 17:31:05 -- accel/accel.sh@42 -- # jq -r . 00:05:44.180 [2024-07-24 17:31:05.490685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:44.180 [2024-07-24 17:31:05.490749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442700 ] 00:05:44.180 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.180 [2024-07-24 17:31:05.545751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.180 [2024-07-24 17:31:05.614183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.557 17:31:06 -- accel/accel.sh@18 -- # out=' 00:05:45.557 SPDK Configuration: 00:05:45.557 Core mask: 0x1 00:05:45.557 00:05:45.557 Accel Perf Configuration: 00:05:45.557 Workload Type: xor 00:05:45.557 Source buffers: 3 00:05:45.557 Transfer size: 4096 bytes 00:05:45.557 Vector count 1 00:05:45.557 Module: software 00:05:45.557 Queue depth: 32 00:05:45.557 Allocate depth: 32 00:05:45.557 # threads/core: 1 00:05:45.557 Run time: 1 seconds 00:05:45.557 Verify: Yes 00:05:45.557 00:05:45.557 Running for 1 seconds... 00:05:45.557 00:05:45.557 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:45.557 ------------------------------------------------------------------------------------ 00:05:45.557 0,0 459424/s 1794 MiB/s 0 0 00:05:45.557 ==================================================================================== 00:05:45.557 Total 459424/s 1794 MiB/s 0 0' 00:05:45.557 17:31:06 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:06 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:45.557 17:31:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:45.557 17:31:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.557 17:31:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.557 17:31:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.557 17:31:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.557 17:31:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.557 17:31:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.557 17:31:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.557 17:31:06 -- accel/accel.sh@42 -- # jq -r . 00:05:45.557 [2024-07-24 17:31:06.838681] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
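For context, the xor workload XORs the configured number of source buffers (three here, via -x 3) into a single destination buffer, and with -y the result is verified. A tiny standalone illustration of the byte-wise operation in shell arithmetic (a sketch of the math only, not how accel_perf or the software accel module actually implements it):

    # XOR three 4-byte "source buffers" into a destination, byte by byte
    src1=(0x11 0x22 0x33 0x44)
    src2=(0xa0 0xb0 0xc0 0xd0)
    src3=(0x01 0x02 0x03 0x04)
    dst=()
    for i in 0 1 2 3; do
      dst[i]=$(( src1[i] ^ src2[i] ^ src3[i] ))
    done
    printf '0x%02x ' "${dst[@]}"; echo    # 0xb0 0x90 0xf0 0x90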
00:05:45.557 [2024-07-24 17:31:06.838757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442940 ] 00:05:45.557 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.557 [2024-07-24 17:31:06.893012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.557 [2024-07-24 17:31:06.961034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val= 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val= 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val=0x1 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val= 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val= 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val=xor 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val=3 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val= 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val=software 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@23 -- # accel_module=software 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val=32 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val=32 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- 
accel/accel.sh@21 -- # val=1 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val=Yes 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val= 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:45.557 17:31:07 -- accel/accel.sh@21 -- # val= 00:05:45.557 17:31:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # IFS=: 00:05:45.557 17:31:07 -- accel/accel.sh@20 -- # read -r var val 00:05:46.936 17:31:08 -- accel/accel.sh@21 -- # val= 00:05:46.936 17:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # IFS=: 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # read -r var val 00:05:46.936 17:31:08 -- accel/accel.sh@21 -- # val= 00:05:46.936 17:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # IFS=: 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # read -r var val 00:05:46.936 17:31:08 -- accel/accel.sh@21 -- # val= 00:05:46.936 17:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # IFS=: 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # read -r var val 00:05:46.936 17:31:08 -- accel/accel.sh@21 -- # val= 00:05:46.936 17:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # IFS=: 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # read -r var val 00:05:46.936 17:31:08 -- accel/accel.sh@21 -- # val= 00:05:46.936 17:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # IFS=: 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # read -r var val 00:05:46.936 17:31:08 -- accel/accel.sh@21 -- # val= 00:05:46.936 17:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # IFS=: 00:05:46.936 17:31:08 -- accel/accel.sh@20 -- # read -r var val 00:05:46.936 17:31:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.936 17:31:08 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:46.936 17:31:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.936 00:05:46.936 real 0m2.702s 00:05:46.936 user 0m2.491s 00:05:46.936 sys 0m0.217s 00:05:46.936 17:31:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.936 17:31:08 -- common/autotest_common.sh@10 -- # set +x 00:05:46.936 ************************************ 00:05:46.936 END TEST accel_xor 00:05:46.936 ************************************ 00:05:46.936 17:31:08 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:46.936 17:31:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:46.936 17:31:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.936 17:31:08 -- common/autotest_common.sh@10 -- # set +x 00:05:46.936 ************************************ 00:05:46.936 START TEST 
accel_dif_verify 00:05:46.936 ************************************ 00:05:46.936 17:31:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:05:46.936 17:31:08 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.936 17:31:08 -- accel/accel.sh@17 -- # local accel_module 00:05:46.936 17:31:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:46.936 17:31:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.936 17:31:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:46.936 17:31:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.936 17:31:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.936 17:31:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.936 17:31:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.936 17:31:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.936 17:31:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.936 17:31:08 -- accel/accel.sh@42 -- # jq -r . 00:05:46.936 [2024-07-24 17:31:08.227128] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:46.936 [2024-07-24 17:31:08.227186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443191 ] 00:05:46.936 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.936 [2024-07-24 17:31:08.279531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.936 [2024-07-24 17:31:08.348297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.311 17:31:09 -- accel/accel.sh@18 -- # out=' 00:05:48.311 SPDK Configuration: 00:05:48.311 Core mask: 0x1 00:05:48.311 00:05:48.311 Accel Perf Configuration: 00:05:48.311 Workload Type: dif_verify 00:05:48.311 Vector size: 4096 bytes 00:05:48.311 Transfer size: 4096 bytes 00:05:48.311 Block size: 512 bytes 00:05:48.311 Metadata size: 8 bytes 00:05:48.311 Vector count 1 00:05:48.311 Module: software 00:05:48.311 Queue depth: 32 00:05:48.311 Allocate depth: 32 00:05:48.311 # threads/core: 1 00:05:48.311 Run time: 1 seconds 00:05:48.311 Verify: No 00:05:48.311 00:05:48.311 Running for 1 seconds... 00:05:48.311 00:05:48.311 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:48.311 ------------------------------------------------------------------------------------ 00:05:48.311 0,0 129792/s 514 MiB/s 0 0 00:05:48.311 ==================================================================================== 00:05:48.311 Total 129792/s 507 MiB/s 0 0' 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:48.311 17:31:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:48.311 17:31:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.311 17:31:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.311 17:31:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.311 17:31:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.311 17:31:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.311 17:31:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.311 17:31:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.311 17:31:09 -- accel/accel.sh@42 -- # jq -r . 
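The dif_verify workload checks the 8-byte DIF metadata carried per 512-byte block of each 4096-byte transfer, per the configuration printed above; conventionally that field holds a guard CRC, an application tag and a reference tag, though exactly which checks accel_perf exercises depends on its defaults. The same case can be rerun by hand against this build tree, minus the JSON config the harness feeds on /dev/fd/62 (a sketch, assuming the build artifacts are still present at the workspace path used by this job):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -t 1 -w dif_verify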
00:05:48.311 [2024-07-24 17:31:09.570729] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:48.311 [2024-07-24 17:31:09.570788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443423 ] 00:05:48.311 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.311 [2024-07-24 17:31:09.623544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.311 [2024-07-24 17:31:09.693169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val= 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val= 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val=0x1 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val= 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val= 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val=dif_verify 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val= 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val=software 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@23 -- # 
accel_module=software 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val=32 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val=32 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val=1 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val=No 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val= 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:48.311 17:31:09 -- accel/accel.sh@21 -- # val= 00:05:48.311 17:31:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # IFS=: 00:05:48.311 17:31:09 -- accel/accel.sh@20 -- # read -r var val 00:05:49.686 17:31:10 -- accel/accel.sh@21 -- # val= 00:05:49.686 17:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # IFS=: 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # read -r var val 00:05:49.686 17:31:10 -- accel/accel.sh@21 -- # val= 00:05:49.686 17:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # IFS=: 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # read -r var val 00:05:49.686 17:31:10 -- accel/accel.sh@21 -- # val= 00:05:49.686 17:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # IFS=: 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # read -r var val 00:05:49.686 17:31:10 -- accel/accel.sh@21 -- # val= 00:05:49.686 17:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # IFS=: 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # read -r var val 00:05:49.686 17:31:10 -- accel/accel.sh@21 -- # val= 00:05:49.686 17:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # IFS=: 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # read -r var val 00:05:49.686 17:31:10 -- accel/accel.sh@21 -- # val= 00:05:49.686 17:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # IFS=: 00:05:49.686 17:31:10 -- accel/accel.sh@20 -- # read -r var val 00:05:49.686 17:31:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:49.686 17:31:10 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:49.686 17:31:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.686 00:05:49.686 real 0m2.694s 00:05:49.686 user 0m2.483s 00:05:49.686 sys 0m0.222s 00:05:49.686 17:31:10 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.686 17:31:10 -- common/autotest_common.sh@10 -- # set +x 00:05:49.686 ************************************ 00:05:49.686 END TEST accel_dif_verify 00:05:49.686 ************************************ 00:05:49.686 17:31:10 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:49.686 17:31:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:49.686 17:31:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.686 17:31:10 -- common/autotest_common.sh@10 -- # set +x 00:05:49.686 ************************************ 00:05:49.686 START TEST accel_dif_generate 00:05:49.686 ************************************ 00:05:49.686 17:31:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:05:49.686 17:31:10 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.686 17:31:10 -- accel/accel.sh@17 -- # local accel_module 00:05:49.686 17:31:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:49.686 17:31:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:49.686 17:31:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.686 17:31:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.686 17:31:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.686 17:31:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.686 17:31:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.686 17:31:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.686 17:31:10 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.686 17:31:10 -- accel/accel.sh@42 -- # jq -r . 00:05:49.686 [2024-07-24 17:31:10.956914] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:49.686 [2024-07-24 17:31:10.956974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443678 ] 00:05:49.686 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.686 [2024-07-24 17:31:11.010415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.686 [2024-07-24 17:31:11.084139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.063 17:31:12 -- accel/accel.sh@18 -- # out=' 00:05:51.063 SPDK Configuration: 00:05:51.063 Core mask: 0x1 00:05:51.063 00:05:51.063 Accel Perf Configuration: 00:05:51.063 Workload Type: dif_generate 00:05:51.063 Vector size: 4096 bytes 00:05:51.063 Transfer size: 4096 bytes 00:05:51.063 Block size: 512 bytes 00:05:51.063 Metadata size: 8 bytes 00:05:51.063 Vector count 1 00:05:51.063 Module: software 00:05:51.063 Queue depth: 32 00:05:51.063 Allocate depth: 32 00:05:51.063 # threads/core: 1 00:05:51.063 Run time: 1 seconds 00:05:51.063 Verify: No 00:05:51.063 00:05:51.063 Running for 1 seconds... 
00:05:51.063 00:05:51.063 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.063 ------------------------------------------------------------------------------------ 00:05:51.063 0,0 156672/s 621 MiB/s 0 0 00:05:51.063 ==================================================================================== 00:05:51.063 Total 156672/s 612 MiB/s 0 0' 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:51.063 17:31:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:51.063 17:31:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.063 17:31:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.063 17:31:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.063 17:31:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.063 17:31:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.063 17:31:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.063 17:31:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.063 17:31:12 -- accel/accel.sh@42 -- # jq -r . 00:05:51.063 [2024-07-24 17:31:12.309863] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:51.063 [2024-07-24 17:31:12.309939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443914 ] 00:05:51.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.063 [2024-07-24 17:31:12.364337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.063 [2024-07-24 17:31:12.432491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val= 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val= 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val=0x1 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val= 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val= 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val=dif_generate 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 
00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val= 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val=software 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val=32 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val=32 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val=1 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val=No 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val= 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 17:31:12 -- accel/accel.sh@21 -- # val= 00:05:51.063 17:31:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 17:31:12 -- accel/accel.sh@20 -- # read -r var val 00:05:52.452 17:31:13 -- accel/accel.sh@21 -- # val= 00:05:52.452 17:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.452 17:31:13 -- accel/accel.sh@20 -- # IFS=: 00:05:52.452 17:31:13 -- accel/accel.sh@20 -- # read -r var val 00:05:52.452 17:31:13 -- accel/accel.sh@21 -- # val= 00:05:52.452 17:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.452 17:31:13 -- accel/accel.sh@20 -- # IFS=: 00:05:52.452 17:31:13 -- accel/accel.sh@20 -- # read -r var val 00:05:52.452 17:31:13 -- accel/accel.sh@21 -- # val= 00:05:52.452 17:31:13 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:52.452 17:31:13 -- accel/accel.sh@20 -- # IFS=: 00:05:52.452 17:31:13 -- accel/accel.sh@20 -- # read -r var val 00:05:52.452 17:31:13 -- accel/accel.sh@21 -- # val= 00:05:52.452 17:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.452 17:31:13 -- accel/accel.sh@20 -- # IFS=: 00:05:52.452 17:31:13 -- accel/accel.sh@20 -- # read -r var val 00:05:52.452 17:31:13 -- accel/accel.sh@21 -- # val= 00:05:52.452 17:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.452 17:31:13 -- accel/accel.sh@20 -- # IFS=: 00:05:52.452 17:31:13 -- accel/accel.sh@20 -- # read -r var val 00:05:52.452 17:31:13 -- accel/accel.sh@21 -- # val= 00:05:52.453 17:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.453 17:31:13 -- accel/accel.sh@20 -- # IFS=: 00:05:52.453 17:31:13 -- accel/accel.sh@20 -- # read -r var val 00:05:52.453 17:31:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.453 17:31:13 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:52.453 17:31:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.453 00:05:52.453 real 0m2.705s 00:05:52.453 user 0m2.493s 00:05:52.453 sys 0m0.221s 00:05:52.453 17:31:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.453 17:31:13 -- common/autotest_common.sh@10 -- # set +x 00:05:52.453 ************************************ 00:05:52.453 END TEST accel_dif_generate 00:05:52.453 ************************************ 00:05:52.453 17:31:13 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:52.453 17:31:13 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:52.453 17:31:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.453 17:31:13 -- common/autotest_common.sh@10 -- # set +x 00:05:52.453 ************************************ 00:05:52.453 START TEST accel_dif_generate_copy 00:05:52.453 ************************************ 00:05:52.453 17:31:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:05:52.453 17:31:13 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.453 17:31:13 -- accel/accel.sh@17 -- # local accel_module 00:05:52.453 17:31:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:52.453 17:31:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.453 17:31:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:52.453 17:31:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.453 17:31:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.453 17:31:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.453 17:31:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.453 17:31:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.453 17:31:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.453 17:31:13 -- accel/accel.sh@42 -- # jq -r . 00:05:52.453 [2024-07-24 17:31:13.696059] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
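Each case above is driven through the harness's run_test wrapper, which prints the START/END TEST banners and times the test body (the real/user/sys lines come from that timing). A rough stand-in for that behaviour, reconstructed from the output alone rather than from common/autotest_common.sh itself, so treat every detail as hypothetical:

    # hypothetical re-creation of the banner-and-timing pattern seen in this log
    run_test_sketch() {
      local name=$1; shift
      printf '%s\n' "************************************" "START TEST $name" "************************************"
      time "$@"
      printf '%s\n' "************************************" "END TEST $name" "************************************"
    }
    run_test_sketch accel_dif_generate_copy sleep 1    # "sleep 1" stands in for the real workload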
00:05:52.453 [2024-07-24 17:31:13.696135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444163 ] 00:05:52.453 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.453 [2024-07-24 17:31:13.749873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.453 [2024-07-24 17:31:13.818532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.828 17:31:15 -- accel/accel.sh@18 -- # out=' 00:05:53.828 SPDK Configuration: 00:05:53.828 Core mask: 0x1 00:05:53.828 00:05:53.828 Accel Perf Configuration: 00:05:53.828 Workload Type: dif_generate_copy 00:05:53.828 Vector size: 4096 bytes 00:05:53.828 Transfer size: 4096 bytes 00:05:53.828 Vector count 1 00:05:53.828 Module: software 00:05:53.828 Queue depth: 32 00:05:53.828 Allocate depth: 32 00:05:53.828 # threads/core: 1 00:05:53.828 Run time: 1 seconds 00:05:53.828 Verify: No 00:05:53.828 00:05:53.828 Running for 1 seconds... 00:05:53.828 00:05:53.828 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:53.828 ------------------------------------------------------------------------------------ 00:05:53.828 0,0 120448/s 477 MiB/s 0 0 00:05:53.828 ==================================================================================== 00:05:53.828 Total 120448/s 470 MiB/s 0 0' 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:53.828 17:31:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:53.828 17:31:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.828 17:31:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.828 17:31:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.828 17:31:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.828 17:31:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.828 17:31:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.828 17:31:15 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.828 17:31:15 -- accel/accel.sh@42 -- # jq -r . 00:05:53.828 [2024-07-24 17:31:15.041234] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
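The second run of each case is launched with -c /dev/fd/62, i.e. the JSON assembled by build_accel_config is handed to accel_perf on an anonymous file descriptor, which is what bash process substitution produces. A small sketch of that mechanism (the empty JSON payload is a placeholder, not the config accel.sh actually builds):

    # process substitution is what puts a generated config on a /dev/fd/<n> path:
    ls -l <(echo '{}')                                   # prints a /dev/fd/63-style path
    # which is presumably how the harness ends up with invocations such as:
    # accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy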
00:05:53.828 [2024-07-24 17:31:15.041290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444405 ] 00:05:53.828 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.828 [2024-07-24 17:31:15.094260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.828 [2024-07-24 17:31:15.161932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val= 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val= 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val=0x1 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val= 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val= 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val= 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val=software 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@23 -- # accel_module=software 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val=32 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val=32 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var 
val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val=1 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val=No 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val= 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:53.828 17:31:15 -- accel/accel.sh@21 -- # val= 00:05:53.828 17:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # IFS=: 00:05:53.828 17:31:15 -- accel/accel.sh@20 -- # read -r var val 00:05:54.765 17:31:16 -- accel/accel.sh@21 -- # val= 00:05:54.765 17:31:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # IFS=: 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # read -r var val 00:05:54.765 17:31:16 -- accel/accel.sh@21 -- # val= 00:05:54.765 17:31:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # IFS=: 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # read -r var val 00:05:54.765 17:31:16 -- accel/accel.sh@21 -- # val= 00:05:54.765 17:31:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # IFS=: 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # read -r var val 00:05:54.765 17:31:16 -- accel/accel.sh@21 -- # val= 00:05:54.765 17:31:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # IFS=: 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # read -r var val 00:05:54.765 17:31:16 -- accel/accel.sh@21 -- # val= 00:05:54.765 17:31:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # IFS=: 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # read -r var val 00:05:54.765 17:31:16 -- accel/accel.sh@21 -- # val= 00:05:54.765 17:31:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # IFS=: 00:05:54.765 17:31:16 -- accel/accel.sh@20 -- # read -r var val 00:05:55.024 17:31:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.024 17:31:16 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:55.024 17:31:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.024 00:05:55.024 real 0m2.694s 00:05:55.024 user 0m2.481s 00:05:55.024 sys 0m0.221s 00:05:55.024 17:31:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.024 17:31:16 -- common/autotest_common.sh@10 -- # set +x 00:05:55.024 ************************************ 00:05:55.024 END TEST accel_dif_generate_copy 00:05:55.024 ************************************ 00:05:55.024 17:31:16 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:55.024 17:31:16 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.024 17:31:16 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:55.024 17:31:16 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.024 17:31:16 -- common/autotest_common.sh@10 -- # set +x 00:05:55.024 ************************************ 00:05:55.024 START TEST accel_comp 00:05:55.024 ************************************ 00:05:55.024 17:31:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.024 17:31:16 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.024 17:31:16 -- accel/accel.sh@17 -- # local accel_module 00:05:55.024 17:31:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.024 17:31:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.024 17:31:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.024 17:31:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.024 17:31:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.024 17:31:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.024 17:31:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.024 17:31:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.024 17:31:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.024 17:31:16 -- accel/accel.sh@42 -- # jq -r . 00:05:55.024 [2024-07-24 17:31:16.416536] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:55.024 [2024-07-24 17:31:16.416581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444652 ] 00:05:55.024 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.024 [2024-07-24 17:31:16.465421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.024 [2024-07-24 17:31:16.542325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.400 17:31:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:56.400 00:05:56.400 SPDK Configuration: 00:05:56.400 Core mask: 0x1 00:05:56.400 00:05:56.400 Accel Perf Configuration: 00:05:56.400 Workload Type: compress 00:05:56.400 Transfer size: 4096 bytes 00:05:56.400 Vector count 1 00:05:56.400 Module: software 00:05:56.400 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.400 Queue depth: 32 00:05:56.400 Allocate depth: 32 00:05:56.400 # threads/core: 1 00:05:56.400 Run time: 1 seconds 00:05:56.400 Verify: No 00:05:56.400 00:05:56.400 Running for 1 seconds... 
00:05:56.400 00:05:56.400 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.400 ------------------------------------------------------------------------------------ 00:05:56.400 0,0 62176/s 259 MiB/s 0 0 00:05:56.400 ==================================================================================== 00:05:56.400 Total 62176/s 242 MiB/s 0 0' 00:05:56.400 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.400 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.400 17:31:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.400 17:31:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.400 17:31:17 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.400 17:31:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.400 17:31:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.400 17:31:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.400 17:31:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.400 17:31:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.400 17:31:17 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.400 17:31:17 -- accel/accel.sh@42 -- # jq -r . 00:05:56.400 [2024-07-24 17:31:17.768617] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:56.400 [2024-07-24 17:31:17.768691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444890 ] 00:05:56.400 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.400 [2024-07-24 17:31:17.822754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.400 [2024-07-24 17:31:17.890798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.400 17:31:17 -- accel/accel.sh@21 -- # val= 00:05:56.400 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.400 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.400 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.400 17:31:17 -- accel/accel.sh@21 -- # val= 00:05:56.400 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.400 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.400 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.400 17:31:17 -- accel/accel.sh@21 -- # val= 00:05:56.400 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.400 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.400 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.400 17:31:17 -- accel/accel.sh@21 -- # val=0x1 00:05:56.400 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.400 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val= 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val= 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val=compress 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 
17:31:17 -- accel/accel.sh@24 -- # accel_opc=compress 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val= 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val=software 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val=32 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val=32 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val=1 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val=No 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val= 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:56.401 17:31:17 -- accel/accel.sh@21 -- # val= 00:05:56.401 17:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # IFS=: 00:05:56.401 17:31:17 -- accel/accel.sh@20 -- # read -r var val 00:05:57.777 17:31:19 -- accel/accel.sh@21 -- # val= 00:05:57.777 17:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # IFS=: 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # read -r var val 00:05:57.777 17:31:19 -- accel/accel.sh@21 -- # val= 00:05:57.777 17:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # IFS=: 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # read -r var val 00:05:57.777 17:31:19 -- accel/accel.sh@21 -- # val= 00:05:57.777 17:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # 
IFS=: 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # read -r var val 00:05:57.777 17:31:19 -- accel/accel.sh@21 -- # val= 00:05:57.777 17:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # IFS=: 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # read -r var val 00:05:57.777 17:31:19 -- accel/accel.sh@21 -- # val= 00:05:57.777 17:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # IFS=: 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # read -r var val 00:05:57.777 17:31:19 -- accel/accel.sh@21 -- # val= 00:05:57.777 17:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # IFS=: 00:05:57.777 17:31:19 -- accel/accel.sh@20 -- # read -r var val 00:05:57.777 17:31:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:57.777 17:31:19 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:05:57.777 17:31:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.777 00:05:57.777 real 0m2.697s 00:05:57.777 user 0m2.496s 00:05:57.777 sys 0m0.209s 00:05:57.777 17:31:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.777 17:31:19 -- common/autotest_common.sh@10 -- # set +x 00:05:57.777 ************************************ 00:05:57.777 END TEST accel_comp 00:05:57.777 ************************************ 00:05:57.777 17:31:19 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.777 17:31:19 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:57.777 17:31:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.777 17:31:19 -- common/autotest_common.sh@10 -- # set +x 00:05:57.777 ************************************ 00:05:57.777 START TEST accel_decomp 00:05:57.777 ************************************ 00:05:57.777 17:31:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.777 17:31:19 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.777 17:31:19 -- accel/accel.sh@17 -- # local accel_module 00:05:57.777 17:31:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.777 17:31:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.777 17:31:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.777 17:31:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.777 17:31:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.777 17:31:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.777 17:31:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.777 17:31:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.777 17:31:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.777 17:31:19 -- accel/accel.sh@42 -- # jq -r . 00:05:57.777 [2024-07-24 17:31:19.153631] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:57.777 [2024-07-24 17:31:19.153692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445143 ] 00:05:57.778 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.778 [2024-07-24 17:31:19.201933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.778 [2024-07-24 17:31:19.271327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.153 17:31:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:59.153 00:05:59.153 SPDK Configuration: 00:05:59.153 Core mask: 0x1 00:05:59.153 00:05:59.153 Accel Perf Configuration: 00:05:59.153 Workload Type: decompress 00:05:59.153 Transfer size: 4096 bytes 00:05:59.153 Vector count 1 00:05:59.153 Module: software 00:05:59.153 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.153 Queue depth: 32 00:05:59.153 Allocate depth: 32 00:05:59.153 # threads/core: 1 00:05:59.153 Run time: 1 seconds 00:05:59.153 Verify: Yes 00:05:59.153 00:05:59.153 Running for 1 seconds... 00:05:59.153 00:05:59.153 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.153 ------------------------------------------------------------------------------------ 00:05:59.153 0,0 71904/s 280 MiB/s 0 0 00:05:59.153 ==================================================================================== 00:05:59.153 Total 71904/s 280 MiB/s 0 0' 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:59.153 17:31:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:59.153 17:31:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.153 17:31:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.153 17:31:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.153 17:31:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.153 17:31:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.153 17:31:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.153 17:31:20 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.153 17:31:20 -- accel/accel.sh@42 -- # jq -r . 00:05:59.153 [2024-07-24 17:31:20.496145] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
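The decompress table above is produced by SPDK's accel_perf example application, invoked by test/accel/accel.sh with the exact command shown in the trace. A minimal sketch of a comparable standalone run, assuming a built SPDK tree at ./spdk (the path is illustrative; only the flags visible in the trace are taken from this log):

    # 1-second single-core software decompress of the bib test input, with verification (-y)
    ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y
    # accel.sh additionally passes -c /dev/fd/62 to hand the app a generated JSON accel
    # config; with no extra accel modules configured, that option is typically not needed.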
00:05:59.153 [2024-07-24 17:31:20.496214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445377 ] 00:05:59.153 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.153 [2024-07-24 17:31:20.550698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.153 [2024-07-24 17:31:20.619262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val= 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val= 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val= 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val=0x1 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val= 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val= 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val=decompress 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.153 17:31:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val= 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val=software 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.153 17:31:20 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.153 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.153 17:31:20 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.153 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.154 17:31:20 -- accel/accel.sh@21 -- # val=32 00:05:59.154 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.154 17:31:20 
-- accel/accel.sh@20 -- # read -r var val 00:05:59.154 17:31:20 -- accel/accel.sh@21 -- # val=32 00:05:59.154 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.154 17:31:20 -- accel/accel.sh@21 -- # val=1 00:05:59.154 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.154 17:31:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.154 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.154 17:31:20 -- accel/accel.sh@21 -- # val=Yes 00:05:59.154 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.154 17:31:20 -- accel/accel.sh@21 -- # val= 00:05:59.154 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:05:59.154 17:31:20 -- accel/accel.sh@21 -- # val= 00:05:59.154 17:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # IFS=: 00:05:59.154 17:31:20 -- accel/accel.sh@20 -- # read -r var val 00:06:00.532 17:31:21 -- accel/accel.sh@21 -- # val= 00:06:00.532 17:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # IFS=: 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # read -r var val 00:06:00.532 17:31:21 -- accel/accel.sh@21 -- # val= 00:06:00.532 17:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # IFS=: 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # read -r var val 00:06:00.532 17:31:21 -- accel/accel.sh@21 -- # val= 00:06:00.532 17:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # IFS=: 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # read -r var val 00:06:00.532 17:31:21 -- accel/accel.sh@21 -- # val= 00:06:00.532 17:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # IFS=: 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # read -r var val 00:06:00.532 17:31:21 -- accel/accel.sh@21 -- # val= 00:06:00.532 17:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # IFS=: 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # read -r var val 00:06:00.532 17:31:21 -- accel/accel.sh@21 -- # val= 00:06:00.532 17:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # IFS=: 00:06:00.532 17:31:21 -- accel/accel.sh@20 -- # read -r var val 00:06:00.532 17:31:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.532 17:31:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:00.532 17:31:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.532 00:06:00.532 real 0m2.688s 00:06:00.532 user 0m2.474s 00:06:00.532 sys 0m0.222s 00:06:00.532 17:31:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.532 17:31:21 -- common/autotest_common.sh@10 -- # set +x 00:06:00.532 ************************************ 00:06:00.532 END TEST accel_decomp 00:06:00.532 ************************************ 00:06:00.532 17:31:21 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.532 17:31:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:00.532 17:31:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.532 17:31:21 -- common/autotest_common.sh@10 -- # set +x 00:06:00.532 ************************************ 00:06:00.532 START TEST accel_decmop_full 00:06:00.532 ************************************ 00:06:00.532 17:31:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.532 17:31:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.532 17:31:21 -- accel/accel.sh@17 -- # local accel_module 00:06:00.532 17:31:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.532 17:31:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.532 17:31:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.532 17:31:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.532 17:31:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.532 17:31:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.532 17:31:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.532 17:31:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.532 17:31:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.532 17:31:21 -- accel/accel.sh@42 -- # jq -r . 00:06:00.532 [2024-07-24 17:31:21.887424] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:00.532 [2024-07-24 17:31:21.887500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445626 ] 00:06:00.532 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.532 [2024-07-24 17:31:21.941807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.532 [2024-07-24 17:31:22.010959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.910 17:31:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:01.910 00:06:01.910 SPDK Configuration: 00:06:01.910 Core mask: 0x1 00:06:01.910 00:06:01.910 Accel Perf Configuration: 00:06:01.910 Workload Type: decompress 00:06:01.910 Transfer size: 111250 bytes 00:06:01.910 Vector count 1 00:06:01.910 Module: software 00:06:01.910 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.910 Queue depth: 32 00:06:01.910 Allocate depth: 32 00:06:01.910 # threads/core: 1 00:06:01.910 Run time: 1 seconds 00:06:01.910 Verify: Yes 00:06:01.910 00:06:01.910 Running for 1 seconds... 
00:06:01.910 00:06:01.910 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:01.910 ------------------------------------------------------------------------------------ 00:06:01.910 0,0 4832/s 512 MiB/s 0 0 00:06:01.910 ==================================================================================== 00:06:01.910 Total 4832/s 512 MiB/s 0 0' 00:06:01.910 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.910 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.910 17:31:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:01.910 17:31:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:01.910 17:31:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.910 17:31:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.910 17:31:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.910 17:31:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.910 17:31:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.910 17:31:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.910 17:31:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.910 17:31:23 -- accel/accel.sh@42 -- # jq -r . 00:06:01.910 [2024-07-24 17:31:23.245185] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
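Relative to the 4096-byte runs, the only change in this accel_decmop_full pass is the -o 0 argument on the accel_perf command line above: the run reports 111250-byte transfers instead of the 4096-byte default, so the operation rate drops to about 4.8K/s while per-operation bandwidth rises to about 512 MiB/s. A hedged sketch of the two variants side by side (paths are illustrative):

    BIB=./spdk/test/accel/bib
    ./spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y        # default 4096-byte transfers
    ./spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y -o 0   # full-buffer run, reported as 111250-byte transfers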
00:06:01.910 17:31:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:01.910 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val= 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val=software 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val=32 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val=32 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val=1 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val=Yes 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val= 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:01.911 17:31:23 -- accel/accel.sh@21 -- # val= 00:06:01.911 17:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # IFS=: 00:06:01.911 17:31:23 -- accel/accel.sh@20 -- # read -r var val 00:06:03.288 17:31:24 -- accel/accel.sh@21 -- # val= 00:06:03.288 17:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # IFS=: 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # read -r var val 00:06:03.288 17:31:24 -- accel/accel.sh@21 -- # val= 00:06:03.288 17:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # IFS=: 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # read -r var val 00:06:03.288 17:31:24 -- accel/accel.sh@21 -- # val= 00:06:03.288 17:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.288 17:31:24 -- 
accel/accel.sh@20 -- # IFS=: 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # read -r var val 00:06:03.288 17:31:24 -- accel/accel.sh@21 -- # val= 00:06:03.288 17:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # IFS=: 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # read -r var val 00:06:03.288 17:31:24 -- accel/accel.sh@21 -- # val= 00:06:03.288 17:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # IFS=: 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # read -r var val 00:06:03.288 17:31:24 -- accel/accel.sh@21 -- # val= 00:06:03.288 17:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # IFS=: 00:06:03.288 17:31:24 -- accel/accel.sh@20 -- # read -r var val 00:06:03.288 17:31:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:03.288 17:31:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:03.288 17:31:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.288 00:06:03.288 real 0m2.719s 00:06:03.288 user 0m2.499s 00:06:03.288 sys 0m0.228s 00:06:03.288 17:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.288 17:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:03.288 ************************************ 00:06:03.288 END TEST accel_decmop_full 00:06:03.288 ************************************ 00:06:03.288 17:31:24 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.288 17:31:24 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:03.288 17:31:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.288 17:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:03.288 ************************************ 00:06:03.288 START TEST accel_decomp_mcore 00:06:03.288 ************************************ 00:06:03.288 17:31:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.288 17:31:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.288 17:31:24 -- accel/accel.sh@17 -- # local accel_module 00:06:03.288 17:31:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.288 17:31:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.288 17:31:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.288 17:31:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.288 17:31:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.288 17:31:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.288 17:31:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.288 17:31:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.288 17:31:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.288 17:31:24 -- accel/accel.sh@42 -- # jq -r . 00:06:03.288 [2024-07-24 17:31:24.640795] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:03.288 [2024-07-24 17:31:24.640851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446114 ] 00:06:03.288 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.288 [2024-07-24 17:31:24.693988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.288 [2024-07-24 17:31:24.764901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.288 [2024-07-24 17:31:24.764999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.288 [2024-07-24 17:31:24.765097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.288 [2024-07-24 17:31:24.765099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.696 17:31:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:04.696 00:06:04.696 SPDK Configuration: 00:06:04.696 Core mask: 0xf 00:06:04.696 00:06:04.696 Accel Perf Configuration: 00:06:04.696 Workload Type: decompress 00:06:04.696 Transfer size: 4096 bytes 00:06:04.696 Vector count 1 00:06:04.696 Module: software 00:06:04.696 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.696 Queue depth: 32 00:06:04.696 Allocate depth: 32 00:06:04.696 # threads/core: 1 00:06:04.696 Run time: 1 seconds 00:06:04.696 Verify: Yes 00:06:04.696 00:06:04.696 Running for 1 seconds... 00:06:04.696 00:06:04.696 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.696 ------------------------------------------------------------------------------------ 00:06:04.696 0,0 59648/s 233 MiB/s 0 0 00:06:04.696 3,0 61536/s 240 MiB/s 0 0 00:06:04.696 2,0 61664/s 240 MiB/s 0 0 00:06:04.696 1,0 61568/s 240 MiB/s 0 0 00:06:04.696 ==================================================================================== 00:06:04.696 Total 244416/s 954 MiB/s 0 0' 00:06:04.696 17:31:25 -- accel/accel.sh@20 -- # IFS=: 00:06:04.696 17:31:25 -- accel/accel.sh@20 -- # read -r var val 00:06:04.696 17:31:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.696 17:31:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.696 17:31:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.696 17:31:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.696 17:31:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.696 17:31:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.696 17:31:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.696 17:31:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.696 17:31:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.696 17:31:25 -- accel/accel.sh@42 -- # jq -r . 00:06:04.696 [2024-07-24 17:31:25.999113] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
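This accel_decomp_mcore pass spreads the same 4096-byte decompress workload across four reactors: the -m 0xf core mask on the command line shows up as -c 0xf in the EAL parameters, app.c reports four available cores, and the table lists cores 0-3 at roughly 233-240 MiB/s each for about 954 MiB/s in total. A sketch of the multi-core variant, with the mask as an assumption to adapt to the local machine:

    # 0xf selects cores 0-3; pick a mask matching the cores actually available
    ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -m 0xf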
00:06:04.696 [2024-07-24 17:31:25.999184] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446353 ] 00:06:04.696 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.696 [2024-07-24 17:31:26.056194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.696 [2024-07-24 17:31:26.127135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.696 [2024-07-24 17:31:26.127231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.696 [2024-07-24 17:31:26.127315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.696 [2024-07-24 17:31:26.127316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.696 17:31:26 -- accel/accel.sh@21 -- # val= 00:06:04.696 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.696 17:31:26 -- accel/accel.sh@21 -- # val= 00:06:04.696 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.696 17:31:26 -- accel/accel.sh@21 -- # val= 00:06:04.696 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.696 17:31:26 -- accel/accel.sh@21 -- # val=0xf 00:06:04.696 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.696 17:31:26 -- accel/accel.sh@21 -- # val= 00:06:04.696 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.696 17:31:26 -- accel/accel.sh@21 -- # val= 00:06:04.696 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.696 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.696 17:31:26 -- accel/accel.sh@21 -- # val=decompress 00:06:04.696 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.696 17:31:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val= 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val=software 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val=32 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val=32 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val=1 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val=Yes 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val= 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:04.697 17:31:26 -- accel/accel.sh@21 -- # val= 00:06:04.697 17:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # IFS=: 00:06:04.697 17:31:26 -- accel/accel.sh@20 -- # read -r var val 00:06:06.077 17:31:27 -- accel/accel.sh@21 -- # val= 00:06:06.077 17:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.077 17:31:27 -- accel/accel.sh@20 -- # IFS=: 00:06:06.077 17:31:27 -- accel/accel.sh@20 -- # read -r var val 00:06:06.077 17:31:27 -- accel/accel.sh@21 -- # val= 00:06:06.077 17:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # IFS=: 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # read -r var val 00:06:06.078 17:31:27 -- accel/accel.sh@21 -- # val= 00:06:06.078 17:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # IFS=: 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # read -r var val 00:06:06.078 17:31:27 -- accel/accel.sh@21 -- # val= 00:06:06.078 17:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # IFS=: 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # read -r var val 00:06:06.078 17:31:27 -- accel/accel.sh@21 -- # val= 00:06:06.078 17:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # IFS=: 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # read -r var val 00:06:06.078 17:31:27 -- accel/accel.sh@21 -- # val= 00:06:06.078 17:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # IFS=: 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # read -r var val 00:06:06.078 17:31:27 -- accel/accel.sh@21 -- # val= 00:06:06.078 17:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # IFS=: 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # read -r var val 00:06:06.078 17:31:27 -- accel/accel.sh@21 -- # val= 00:06:06.078 17:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.078 
17:31:27 -- accel/accel.sh@20 -- # IFS=: 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # read -r var val 00:06:06.078 17:31:27 -- accel/accel.sh@21 -- # val= 00:06:06.078 17:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # IFS=: 00:06:06.078 17:31:27 -- accel/accel.sh@20 -- # read -r var val 00:06:06.078 17:31:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.078 17:31:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:06.078 17:31:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.078 00:06:06.078 real 0m2.726s 00:06:06.078 user 0m9.153s 00:06:06.078 sys 0m0.240s 00:06:06.078 17:31:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.078 17:31:27 -- common/autotest_common.sh@10 -- # set +x 00:06:06.078 ************************************ 00:06:06.078 END TEST accel_decomp_mcore 00:06:06.078 ************************************ 00:06:06.078 17:31:27 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.078 17:31:27 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:06.078 17:31:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.078 17:31:27 -- common/autotest_common.sh@10 -- # set +x 00:06:06.078 ************************************ 00:06:06.078 START TEST accel_decomp_full_mcore 00:06:06.078 ************************************ 00:06:06.078 17:31:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.078 17:31:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.078 17:31:27 -- accel/accel.sh@17 -- # local accel_module 00:06:06.078 17:31:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.078 17:31:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.078 17:31:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.078 17:31:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.078 17:31:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.078 17:31:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.078 17:31:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.078 17:31:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.078 17:31:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.078 17:31:27 -- accel/accel.sh@42 -- # jq -r . 00:06:06.078 [2024-07-24 17:31:27.397278] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:06.078 [2024-07-24 17:31:27.397335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446612 ] 00:06:06.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.078 [2024-07-24 17:31:27.444750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.078 [2024-07-24 17:31:27.516269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.078 [2024-07-24 17:31:27.516366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.078 [2024-07-24 17:31:27.516450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.078 [2024-07-24 17:31:27.516452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.459 17:31:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:07.459 00:06:07.459 SPDK Configuration: 00:06:07.459 Core mask: 0xf 00:06:07.459 00:06:07.459 Accel Perf Configuration: 00:06:07.459 Workload Type: decompress 00:06:07.460 Transfer size: 111250 bytes 00:06:07.460 Vector count 1 00:06:07.460 Module: software 00:06:07.460 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:07.460 Queue depth: 32 00:06:07.460 Allocate depth: 32 00:06:07.460 # threads/core: 1 00:06:07.460 Run time: 1 seconds 00:06:07.460 Verify: Yes 00:06:07.460 00:06:07.460 Running for 1 seconds... 00:06:07.460 00:06:07.460 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.460 ------------------------------------------------------------------------------------ 00:06:07.460 0,0 4512/s 478 MiB/s 0 0 00:06:07.460 3,0 4672/s 495 MiB/s 0 0 00:06:07.460 2,0 4672/s 495 MiB/s 0 0 00:06:07.460 1,0 4672/s 495 MiB/s 0 0 00:06:07.460 ==================================================================================== 00:06:07.460 Total 18528/s 1965 MiB/s 0 0' 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:07.460 17:31:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:07.460 17:31:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.460 17:31:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.460 17:31:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.460 17:31:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.460 17:31:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.460 17:31:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.460 17:31:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.460 17:31:28 -- accel/accel.sh@42 -- # jq -r . 00:06:07.460 [2024-07-24 17:31:28.761022] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
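The Bandwidth column in these tables is simply the transfer count multiplied by the transfer size, so the rows can be cross-checked by hand: for this full-buffer multi-core run, 18528 transfers/s of 111250 bytes is about 1965 MiB/s, matching the Total line. A quick check with bash integer arithmetic, using values from the table above:

    echo $(( 18528 * 111250 / 1048576 ))   # Total row: 1965 MiB/s
    echo $((  4672 * 111250 / 1048576 ))   # one 4672/s core: 495 MiB/s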
00:06:07.460 [2024-07-24 17:31:28.761116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446849 ] 00:06:07.460 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.460 [2024-07-24 17:31:28.816002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.460 [2024-07-24 17:31:28.885787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.460 [2024-07-24 17:31:28.885884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.460 [2024-07-24 17:31:28.885943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.460 [2024-07-24 17:31:28.885945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val= 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val= 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val= 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val=0xf 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val= 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val= 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val=decompress 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val= 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val=software 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val=32 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val=32 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val=1 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val=Yes 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val= 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:07.460 17:31:28 -- accel/accel.sh@21 -- # val= 00:06:07.460 17:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:07.460 17:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:08.840 17:31:30 -- accel/accel.sh@21 -- # val= 00:06:08.840 17:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.840 17:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:08.840 17:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:08.840 17:31:30 -- accel/accel.sh@21 -- # val= 00:06:08.840 17:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.840 17:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:08.840 17:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:08.840 17:31:30 -- accel/accel.sh@21 -- # val= 00:06:08.840 17:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.840 17:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:08.840 17:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:08.840 17:31:30 -- accel/accel.sh@21 -- # val= 00:06:08.840 17:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.840 17:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:08.840 17:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:08.840 17:31:30 -- accel/accel.sh@21 -- # val= 00:06:08.840 17:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.840 17:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:08.840 17:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:08.840 17:31:30 -- accel/accel.sh@21 -- # val= 00:06:08.840 17:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.841 17:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:08.841 17:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:08.841 17:31:30 -- accel/accel.sh@21 -- # val= 00:06:08.841 17:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.841 17:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:08.841 17:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:08.841 17:31:30 -- accel/accel.sh@21 -- # val= 00:06:08.841 17:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.841 
17:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:08.841 17:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:08.841 17:31:30 -- accel/accel.sh@21 -- # val= 00:06:08.841 17:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.841 17:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:08.841 17:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:08.841 17:31:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.841 17:31:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:08.841 17:31:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.841 00:06:08.841 real 0m2.734s 00:06:08.841 user 0m9.236s 00:06:08.841 sys 0m0.229s 00:06:08.841 17:31:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.841 17:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.841 ************************************ 00:06:08.841 END TEST accel_decomp_full_mcore 00:06:08.841 ************************************ 00:06:08.841 17:31:30 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.841 17:31:30 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:08.841 17:31:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.841 17:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.841 ************************************ 00:06:08.841 START TEST accel_decomp_mthread 00:06:08.841 ************************************ 00:06:08.841 17:31:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.841 17:31:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.841 17:31:30 -- accel/accel.sh@17 -- # local accel_module 00:06:08.841 17:31:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.841 17:31:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.841 17:31:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.841 17:31:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.841 17:31:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.841 17:31:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.841 17:31:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.841 17:31:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.841 17:31:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.841 17:31:30 -- accel/accel.sh@42 -- # jq -r . 00:06:08.841 [2024-07-24 17:31:30.157798] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:08.841 [2024-07-24 17:31:30.157845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447103 ] 00:06:08.841 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.841 [2024-07-24 17:31:30.210706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.841 [2024-07-24 17:31:30.280038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.222 17:31:31 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:10.222 00:06:10.222 SPDK Configuration: 00:06:10.222 Core mask: 0x1 00:06:10.222 00:06:10.222 Accel Perf Configuration: 00:06:10.222 Workload Type: decompress 00:06:10.222 Transfer size: 4096 bytes 00:06:10.222 Vector count 1 00:06:10.222 Module: software 00:06:10.222 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:10.222 Queue depth: 32 00:06:10.222 Allocate depth: 32 00:06:10.222 # threads/core: 2 00:06:10.222 Run time: 1 seconds 00:06:10.222 Verify: Yes 00:06:10.222 00:06:10.222 Running for 1 seconds... 00:06:10.222 00:06:10.222 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.222 ------------------------------------------------------------------------------------ 00:06:10.222 0,1 37376/s 146 MiB/s 0 0 00:06:10.222 0,0 37280/s 145 MiB/s 0 0 00:06:10.222 ==================================================================================== 00:06:10.222 Total 74656/s 291 MiB/s 0 0' 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:10.222 17:31:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:10.222 17:31:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.222 17:31:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.222 17:31:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.222 17:31:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.222 17:31:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.222 17:31:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.222 17:31:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.222 17:31:31 -- accel/accel.sh@42 -- # jq -r . 00:06:10.222 [2024-07-24 17:31:31.505347] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
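Here accel_decomp_mthread keeps the single-core 0x1 mask but adds -T 2, so the configuration above reports two threads per core and the table has rows 0,0 and 0,1 sharing core 0 at roughly 145 MiB/s each, close to the ~280 MiB/s a single unthreaded core reached earlier. Sketch of the threaded variant (illustrative path):

    ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -T 2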
00:06:10.222 [2024-07-24 17:31:31.505405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447341 ] 00:06:10.222 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.222 [2024-07-24 17:31:31.557221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.222 [2024-07-24 17:31:31.625697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val= 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val= 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val= 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val=0x1 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val= 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val= 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val=decompress 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val= 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val=software 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val=32 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 
-- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val=32 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val=2 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.222 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.222 17:31:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.222 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.223 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.223 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.223 17:31:31 -- accel/accel.sh@21 -- # val=Yes 00:06:10.223 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.223 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.223 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.223 17:31:31 -- accel/accel.sh@21 -- # val= 00:06:10.223 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.223 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.223 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:10.223 17:31:31 -- accel/accel.sh@21 -- # val= 00:06:10.223 17:31:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.223 17:31:31 -- accel/accel.sh@20 -- # IFS=: 00:06:10.223 17:31:31 -- accel/accel.sh@20 -- # read -r var val 00:06:11.601 17:31:32 -- accel/accel.sh@21 -- # val= 00:06:11.601 17:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:11.601 17:31:32 -- accel/accel.sh@21 -- # val= 00:06:11.601 17:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:11.601 17:31:32 -- accel/accel.sh@21 -- # val= 00:06:11.601 17:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:11.601 17:31:32 -- accel/accel.sh@21 -- # val= 00:06:11.601 17:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:11.601 17:31:32 -- accel/accel.sh@21 -- # val= 00:06:11.601 17:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:11.601 17:31:32 -- accel/accel.sh@21 -- # val= 00:06:11.601 17:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:11.601 17:31:32 -- accel/accel.sh@21 -- # val= 00:06:11.601 17:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:11.601 17:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:11.601 17:31:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.601 17:31:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:11.601 17:31:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.601 00:06:11.601 real 0m2.688s 00:06:11.601 user 0m2.485s 00:06:11.601 sys 0m0.211s 00:06:11.602 17:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.602 17:31:32 -- common/autotest_common.sh@10 -- # set +x 
00:06:11.602 ************************************ 00:06:11.602 END TEST accel_decomp_mthread 00:06:11.602 ************************************ 00:06:11.602 17:31:32 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:11.602 17:31:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:11.602 17:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.602 17:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:11.602 ************************************ 00:06:11.602 START TEST accel_deomp_full_mthread 00:06:11.602 ************************************ 00:06:11.602 17:31:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:11.602 17:31:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.602 17:31:32 -- accel/accel.sh@17 -- # local accel_module 00:06:11.602 17:31:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:11.602 17:31:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.602 17:31:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:11.602 17:31:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.602 17:31:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.602 17:31:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.602 17:31:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.602 17:31:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.602 17:31:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.602 17:31:32 -- accel/accel.sh@42 -- # jq -r . 00:06:11.602 [2024-07-24 17:31:32.895701] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:11.602 [2024-07-24 17:31:32.895777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447594 ] 00:06:11.602 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.602 [2024-07-24 17:31:32.949453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.602 [2024-07-24 17:31:33.018302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.983 17:31:34 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:12.983 00:06:12.983 SPDK Configuration: 00:06:12.983 Core mask: 0x1 00:06:12.983 00:06:12.983 Accel Perf Configuration: 00:06:12.983 Workload Type: decompress 00:06:12.983 Transfer size: 111250 bytes 00:06:12.983 Vector count 1 00:06:12.983 Module: software 00:06:12.983 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.983 Queue depth: 32 00:06:12.983 Allocate depth: 32 00:06:12.983 # threads/core: 2 00:06:12.983 Run time: 1 seconds 00:06:12.983 Verify: Yes 00:06:12.983 00:06:12.983 Running for 1 seconds... 
00:06:12.983 00:06:12.983 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.983 ------------------------------------------------------------------------------------ 00:06:12.983 0,1 2496/s 103 MiB/s 0 0 00:06:12.983 0,0 2464/s 101 MiB/s 0 0 00:06:12.983 ==================================================================================== 00:06:12.983 Total 4960/s 526 MiB/s 0 0' 00:06:12.983 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.983 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.983 17:31:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:12.983 17:31:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:12.983 17:31:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.983 17:31:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.983 17:31:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.983 17:31:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.983 17:31:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.983 17:31:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.983 17:31:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.983 17:31:34 -- accel/accel.sh@42 -- # jq -r . 00:06:12.983 [2024-07-24 17:31:34.271087] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:12.984 [2024-07-24 17:31:34.271143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447826 ] 00:06:12.984 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.984 [2024-07-24 17:31:34.324841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.984 [2024-07-24 17:31:34.392680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val= 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val= 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val= 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val=0x1 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val= 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val= 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val=decompress 00:06:12.984 
17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val= 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val=software 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val=32 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val=32 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val=2 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val=Yes 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val= 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:12.984 17:31:34 -- accel/accel.sh@21 -- # val= 00:06:12.984 17:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:12.984 17:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:14.363 17:31:35 -- accel/accel.sh@21 -- # val= 00:06:14.364 17:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:14.364 17:31:35 -- accel/accel.sh@21 -- # val= 00:06:14.364 17:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:14.364 17:31:35 -- accel/accel.sh@21 -- # val= 00:06:14.364 17:31:35 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:14.364 17:31:35 -- accel/accel.sh@21 -- # val= 00:06:14.364 17:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:14.364 17:31:35 -- accel/accel.sh@21 -- # val= 00:06:14.364 17:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:14.364 17:31:35 -- accel/accel.sh@21 -- # val= 00:06:14.364 17:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:14.364 17:31:35 -- accel/accel.sh@21 -- # val= 00:06:14.364 17:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:14.364 17:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:14.364 17:31:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.364 17:31:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:14.364 17:31:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.364 00:06:14.364 real 0m2.754s 00:06:14.364 user 0m2.536s 00:06:14.364 sys 0m0.226s 00:06:14.364 17:31:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.364 17:31:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.364 ************************************ 00:06:14.364 END TEST accel_deomp_full_mthread 00:06:14.364 ************************************ 00:06:14.364 17:31:35 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:14.364 17:31:35 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:14.364 17:31:35 -- accel/accel.sh@129 -- # build_accel_config 00:06:14.364 17:31:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:14.364 17:31:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.364 17:31:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.364 17:31:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.364 17:31:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.364 17:31:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.364 17:31:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.364 17:31:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.364 17:31:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.364 17:31:35 -- accel/accel.sh@42 -- # jq -r . 00:06:14.364 ************************************ 00:06:14.364 START TEST accel_dif_functional_tests 00:06:14.364 ************************************ 00:06:14.364 17:31:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:14.364 [2024-07-24 17:31:35.698665] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:14.364 [2024-07-24 17:31:35.698711] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448082 ] 00:06:14.364 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.364 [2024-07-24 17:31:35.749527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.364 [2024-07-24 17:31:35.819791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.364 [2024-07-24 17:31:35.819890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.364 [2024-07-24 17:31:35.819892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.364 00:06:14.364 00:06:14.364 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.364 http://cunit.sourceforge.net/ 00:06:14.364 00:06:14.364 00:06:14.364 Suite: accel_dif 00:06:14.364 Test: verify: DIF generated, GUARD check ...passed 00:06:14.364 Test: verify: DIF generated, APPTAG check ...passed 00:06:14.364 Test: verify: DIF generated, REFTAG check ...passed 00:06:14.364 Test: verify: DIF not generated, GUARD check ...[2024-07-24 17:31:35.888018] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:14.364 [2024-07-24 17:31:35.888067] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:14.364 passed 00:06:14.364 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 17:31:35.888098] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:14.364 [2024-07-24 17:31:35.888113] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:14.364 passed 00:06:14.364 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 17:31:35.888129] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:14.364 [2024-07-24 17:31:35.888145] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:14.364 passed 00:06:14.364 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:14.364 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 17:31:35.888185] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:14.364 passed 00:06:14.364 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:14.364 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:14.364 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:14.364 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 17:31:35.888283] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:14.364 passed 00:06:14.364 Test: generate copy: DIF generated, GUARD check ...passed 00:06:14.364 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:14.364 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:14.364 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:14.364 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:14.364 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:14.364 Test: generate copy: iovecs-len validate ...[2024-07-24 17:31:35.888444] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:14.364 passed 00:06:14.364 Test: generate copy: buffer alignment validate ...passed 00:06:14.364 00:06:14.364 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.364 suites 1 1 n/a 0 0 00:06:14.364 tests 20 20 20 0 0 00:06:14.364 asserts 204 204 204 0 n/a 00:06:14.364 00:06:14.364 Elapsed time = 0.002 seconds 00:06:14.624 00:06:14.624 real 0m0.420s 00:06:14.624 user 0m0.630s 00:06:14.624 sys 0m0.145s 00:06:14.624 17:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.624 17:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.624 ************************************ 00:06:14.624 END TEST accel_dif_functional_tests 00:06:14.624 ************************************ 00:06:14.624 00:06:14.624 real 0m57.485s 00:06:14.624 user 1m6.184s 00:06:14.624 sys 0m5.920s 00:06:14.624 17:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.624 17:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.624 ************************************ 00:06:14.624 END TEST accel 00:06:14.624 ************************************ 00:06:14.624 17:31:36 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:14.624 17:31:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.624 17:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.624 17:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.624 ************************************ 00:06:14.624 START TEST accel_rpc 00:06:14.624 ************************************ 00:06:14.624 17:31:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:14.624 * Looking for test storage... 00:06:14.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:14.884 17:31:36 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:14.884 17:31:36 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=448143 00:06:14.884 17:31:36 -- accel/accel_rpc.sh@15 -- # waitforlisten 448143 00:06:14.884 17:31:36 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:14.884 17:31:36 -- common/autotest_common.sh@819 -- # '[' -z 448143 ']' 00:06:14.884 17:31:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.884 17:31:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.884 17:31:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.884 17:31:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.884 17:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.884 [2024-07-24 17:31:36.270207] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:14.884 [2024-07-24 17:31:36.270252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448143 ] 00:06:14.884 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.884 [2024-07-24 17:31:36.324892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.884 [2024-07-24 17:31:36.404492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.884 [2024-07-24 17:31:36.404600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.451 17:31:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.451 17:31:37 -- common/autotest_common.sh@852 -- # return 0 00:06:15.451 17:31:37 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:15.451 17:31:37 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:15.710 17:31:37 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:15.710 17:31:37 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:15.710 17:31:37 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:15.710 17:31:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.710 17:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.710 17:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.710 ************************************ 00:06:15.710 START TEST accel_assign_opcode 00:06:15.710 ************************************ 00:06:15.710 17:31:37 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:15.711 17:31:37 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:15.711 17:31:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.711 17:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.711 [2024-07-24 17:31:37.058537] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:15.711 17:31:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.711 17:31:37 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:15.711 17:31:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.711 17:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.711 [2024-07-24 17:31:37.066551] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:15.711 17:31:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.711 17:31:37 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:15.711 17:31:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.711 17:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.711 17:31:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.711 17:31:37 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:15.711 17:31:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.711 17:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.711 17:31:37 -- accel/accel_rpc.sh@42 -- # grep software 00:06:15.711 17:31:37 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:15.711 17:31:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.711 software 00:06:15.711 00:06:15.711 real 0m0.229s 00:06:15.711 user 0m0.041s 00:06:15.711 sys 0m0.008s 00:06:15.711 17:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.711 17:31:37 -- common/autotest_common.sh@10 -- # set +x 
00:06:15.711 ************************************ 00:06:15.711 END TEST accel_assign_opcode 00:06:15.711 ************************************ 00:06:15.971 17:31:37 -- accel/accel_rpc.sh@55 -- # killprocess 448143 00:06:15.971 17:31:37 -- common/autotest_common.sh@926 -- # '[' -z 448143 ']' 00:06:15.971 17:31:37 -- common/autotest_common.sh@930 -- # kill -0 448143 00:06:15.971 17:31:37 -- common/autotest_common.sh@931 -- # uname 00:06:15.971 17:31:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:15.971 17:31:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 448143 00:06:15.971 17:31:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:15.971 17:31:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:15.971 17:31:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 448143' 00:06:15.971 killing process with pid 448143 00:06:15.971 17:31:37 -- common/autotest_common.sh@945 -- # kill 448143 00:06:15.971 17:31:37 -- common/autotest_common.sh@950 -- # wait 448143 00:06:16.230 00:06:16.230 real 0m1.547s 00:06:16.230 user 0m1.626s 00:06:16.230 sys 0m0.364s 00:06:16.230 17:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.230 17:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:16.230 ************************************ 00:06:16.230 END TEST accel_rpc 00:06:16.230 ************************************ 00:06:16.230 17:31:37 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:16.230 17:31:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.230 17:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.230 17:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:16.230 ************************************ 00:06:16.230 START TEST app_cmdline 00:06:16.230 ************************************ 00:06:16.230 17:31:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:16.230 * Looking for test storage... 00:06:16.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:16.230 17:31:37 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:16.230 17:31:37 -- app/cmdline.sh@17 -- # spdk_tgt_pid=448523 00:06:16.230 17:31:37 -- app/cmdline.sh@18 -- # waitforlisten 448523 00:06:16.230 17:31:37 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:16.230 17:31:37 -- common/autotest_common.sh@819 -- # '[' -z 448523 ']' 00:06:16.230 17:31:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.230 17:31:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.230 17:31:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.230 17:31:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.230 17:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:16.490 [2024-07-24 17:31:37.864031] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:16.490 [2024-07-24 17:31:37.864099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448523 ] 00:06:16.490 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.490 [2024-07-24 17:31:37.918931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.490 [2024-07-24 17:31:37.993544] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.490 [2024-07-24 17:31:37.993667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.058 17:31:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.058 17:31:38 -- common/autotest_common.sh@852 -- # return 0 00:06:17.058 17:31:38 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:17.317 { 00:06:17.317 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:06:17.317 "fields": { 00:06:17.317 "major": 24, 00:06:17.317 "minor": 1, 00:06:17.317 "patch": 1, 00:06:17.317 "suffix": "-pre", 00:06:17.317 "commit": "dbef7efac" 00:06:17.317 } 00:06:17.317 } 00:06:17.317 17:31:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:17.317 17:31:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:17.317 17:31:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:17.317 17:31:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:17.317 17:31:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:17.317 17:31:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:17.317 17:31:38 -- app/cmdline.sh@26 -- # sort 00:06:17.317 17:31:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:17.317 17:31:38 -- common/autotest_common.sh@10 -- # set +x 00:06:17.317 17:31:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:17.317 17:31:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:17.317 17:31:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:17.317 17:31:38 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.317 17:31:38 -- common/autotest_common.sh@640 -- # local es=0 00:06:17.317 17:31:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.317 17:31:38 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.317 17:31:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.317 17:31:38 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.317 17:31:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.317 17:31:38 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.317 17:31:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.317 17:31:38 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.317 17:31:38 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:17.317 17:31:38 -- common/autotest_common.sh@643 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.576 request: 00:06:17.576 { 00:06:17.576 "method": "env_dpdk_get_mem_stats", 00:06:17.576 "req_id": 1 00:06:17.576 } 00:06:17.576 Got JSON-RPC error response 00:06:17.576 response: 00:06:17.576 { 00:06:17.576 "code": -32601, 00:06:17.576 "message": "Method not found" 00:06:17.576 } 00:06:17.576 17:31:39 -- common/autotest_common.sh@643 -- # es=1 00:06:17.576 17:31:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.576 17:31:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:17.576 17:31:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.576 17:31:39 -- app/cmdline.sh@1 -- # killprocess 448523 00:06:17.576 17:31:39 -- common/autotest_common.sh@926 -- # '[' -z 448523 ']' 00:06:17.576 17:31:39 -- common/autotest_common.sh@930 -- # kill -0 448523 00:06:17.576 17:31:39 -- common/autotest_common.sh@931 -- # uname 00:06:17.576 17:31:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:17.576 17:31:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 448523 00:06:17.576 17:31:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:17.576 17:31:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:17.576 17:31:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 448523' 00:06:17.576 killing process with pid 448523 00:06:17.576 17:31:39 -- common/autotest_common.sh@945 -- # kill 448523 00:06:17.576 17:31:39 -- common/autotest_common.sh@950 -- # wait 448523 00:06:17.834 00:06:17.834 real 0m1.665s 00:06:17.834 user 0m1.958s 00:06:17.834 sys 0m0.427s 00:06:17.834 17:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.834 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:06:17.834 ************************************ 00:06:17.834 END TEST app_cmdline 00:06:17.834 ************************************ 00:06:18.094 17:31:39 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:18.094 17:31:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.094 17:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.094 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:06:18.094 ************************************ 00:06:18.094 START TEST version 00:06:18.094 ************************************ 00:06:18.094 17:31:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:18.094 * Looking for test storage... 
00:06:18.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:18.094 17:31:39 -- app/version.sh@17 -- # get_header_version major 00:06:18.094 17:31:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.094 17:31:39 -- app/version.sh@14 -- # cut -f2 00:06:18.094 17:31:39 -- app/version.sh@14 -- # tr -d '"' 00:06:18.094 17:31:39 -- app/version.sh@17 -- # major=24 00:06:18.094 17:31:39 -- app/version.sh@18 -- # get_header_version minor 00:06:18.094 17:31:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.094 17:31:39 -- app/version.sh@14 -- # cut -f2 00:06:18.094 17:31:39 -- app/version.sh@14 -- # tr -d '"' 00:06:18.094 17:31:39 -- app/version.sh@18 -- # minor=1 00:06:18.094 17:31:39 -- app/version.sh@19 -- # get_header_version patch 00:06:18.094 17:31:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.094 17:31:39 -- app/version.sh@14 -- # cut -f2 00:06:18.094 17:31:39 -- app/version.sh@14 -- # tr -d '"' 00:06:18.094 17:31:39 -- app/version.sh@19 -- # patch=1 00:06:18.094 17:31:39 -- app/version.sh@20 -- # get_header_version suffix 00:06:18.094 17:31:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.094 17:31:39 -- app/version.sh@14 -- # cut -f2 00:06:18.094 17:31:39 -- app/version.sh@14 -- # tr -d '"' 00:06:18.094 17:31:39 -- app/version.sh@20 -- # suffix=-pre 00:06:18.094 17:31:39 -- app/version.sh@22 -- # version=24.1 00:06:18.094 17:31:39 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:18.094 17:31:39 -- app/version.sh@25 -- # version=24.1.1 00:06:18.094 17:31:39 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:18.094 17:31:39 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:18.094 17:31:39 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:18.094 17:31:39 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:18.094 17:31:39 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:18.094 00:06:18.094 real 0m0.151s 00:06:18.094 user 0m0.092s 00:06:18.094 sys 0m0.094s 00:06:18.094 17:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.094 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:06:18.094 ************************************ 00:06:18.094 END TEST version 00:06:18.094 ************************************ 00:06:18.094 17:31:39 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:18.094 17:31:39 -- spdk/autotest.sh@204 -- # uname -s 00:06:18.094 17:31:39 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:18.094 17:31:39 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:18.094 17:31:39 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:18.094 17:31:39 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:18.094 17:31:39 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:18.094 17:31:39 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:18.094 17:31:39 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:06:18.094 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:06:18.094 17:31:39 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:18.094 17:31:39 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:18.094 17:31:39 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:18.094 17:31:39 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:18.094 17:31:39 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:18.094 17:31:39 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:18.094 17:31:39 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:18.094 17:31:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:18.094 17:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.094 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:06:18.094 ************************************ 00:06:18.094 START TEST nvmf_tcp 00:06:18.094 ************************************ 00:06:18.094 17:31:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:18.354 * Looking for test storage... 00:06:18.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:18.354 17:31:39 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:18.354 17:31:39 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:18.354 17:31:39 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.354 17:31:39 -- nvmf/common.sh@7 -- # uname -s 00:06:18.354 17:31:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.354 17:31:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.354 17:31:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.354 17:31:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.354 17:31:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.354 17:31:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.354 17:31:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.354 17:31:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.354 17:31:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.354 17:31:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.354 17:31:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:18.354 17:31:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:18.354 17:31:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.354 17:31:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.354 17:31:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:18.354 17:31:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.354 17:31:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.354 17:31:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.354 17:31:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.354 17:31:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.354 17:31:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.354 17:31:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.354 17:31:39 -- paths/export.sh@5 -- # export PATH 00:06:18.355 17:31:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.355 17:31:39 -- nvmf/common.sh@46 -- # : 0 00:06:18.355 17:31:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:18.355 17:31:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:18.355 17:31:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:18.355 17:31:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.355 17:31:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.355 17:31:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:18.355 17:31:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:18.355 17:31:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:18.355 17:31:39 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:18.355 17:31:39 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:18.355 17:31:39 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:18.355 17:31:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:18.355 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:06:18.355 17:31:39 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:18.355 17:31:39 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:18.355 17:31:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:18.355 17:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.355 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:06:18.355 ************************************ 00:06:18.355 START TEST nvmf_example 00:06:18.355 ************************************ 00:06:18.355 17:31:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:18.355 * Looking for test storage... 
00:06:18.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.355 17:31:39 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.355 17:31:39 -- nvmf/common.sh@7 -- # uname -s 00:06:18.355 17:31:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.355 17:31:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.355 17:31:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.355 17:31:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.355 17:31:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.355 17:31:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.355 17:31:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.355 17:31:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.355 17:31:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.355 17:31:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.355 17:31:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:18.355 17:31:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:18.355 17:31:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.355 17:31:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.355 17:31:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:18.355 17:31:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.355 17:31:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.355 17:31:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.355 17:31:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.355 17:31:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.355 17:31:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.355 17:31:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.355 17:31:39 -- paths/export.sh@5 -- # export PATH 00:06:18.355 17:31:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.355 17:31:39 -- nvmf/common.sh@46 -- # : 0 00:06:18.355 17:31:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:18.355 17:31:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:18.355 17:31:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:18.355 17:31:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.355 17:31:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.355 17:31:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:18.355 17:31:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:18.355 17:31:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:18.355 17:31:39 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:18.355 17:31:39 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:18.355 17:31:39 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:18.355 17:31:39 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:18.355 17:31:39 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:18.355 17:31:39 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:18.355 17:31:39 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:18.355 17:31:39 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:18.355 17:31:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:18.355 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:06:18.355 17:31:39 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:18.355 17:31:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:18.355 17:31:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.355 17:31:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:18.355 17:31:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:18.355 17:31:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:18.355 17:31:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.355 17:31:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:18.355 17:31:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.355 17:31:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:18.355 17:31:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:18.355 17:31:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:18.355 17:31:39 -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.965 17:31:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:24.965 17:31:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:24.965 17:31:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:24.965 17:31:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:24.965 17:31:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:24.965 17:31:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:24.965 17:31:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:24.965 17:31:45 -- nvmf/common.sh@294 -- # net_devs=() 00:06:24.965 17:31:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:24.965 17:31:45 -- nvmf/common.sh@295 -- # e810=() 00:06:24.965 17:31:45 -- nvmf/common.sh@295 -- # local -ga e810 00:06:24.965 17:31:45 -- nvmf/common.sh@296 -- # x722=() 00:06:24.965 17:31:45 -- nvmf/common.sh@296 -- # local -ga x722 00:06:24.965 17:31:45 -- nvmf/common.sh@297 -- # mlx=() 00:06:24.965 17:31:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:24.965 17:31:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.965 17:31:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:24.965 17:31:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:24.965 17:31:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:24.965 17:31:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:24.965 17:31:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:24.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:24.965 17:31:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:24.965 17:31:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:24.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:24.965 17:31:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:06:24.965 17:31:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:24.965 17:31:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:24.965 17:31:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:24.965 17:31:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.966 17:31:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:24.966 17:31:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.966 17:31:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:24.966 Found net devices under 0000:86:00.0: cvl_0_0 00:06:24.966 17:31:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.966 17:31:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:24.966 17:31:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.966 17:31:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:24.966 17:31:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.966 17:31:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:24.966 Found net devices under 0000:86:00.1: cvl_0_1 00:06:24.966 17:31:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.966 17:31:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:24.966 17:31:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:24.966 17:31:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:24.966 17:31:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:24.966 17:31:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:24.966 17:31:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.966 17:31:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.966 17:31:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.966 17:31:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:24.966 17:31:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.966 17:31:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.966 17:31:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:24.966 17:31:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.966 17:31:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.966 17:31:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:24.966 17:31:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:24.966 17:31:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.966 17:31:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.966 17:31:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.966 17:31:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.966 17:31:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:24.966 17:31:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.966 17:31:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.966 17:31:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.966 17:31:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:24.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:24.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:06:24.966 00:06:24.966 --- 10.0.0.2 ping statistics --- 00:06:24.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.966 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:06:24.966 17:31:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:06:24.966 00:06:24.966 --- 10.0.0.1 ping statistics --- 00:06:24.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.966 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:06:24.966 17:31:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.966 17:31:45 -- nvmf/common.sh@410 -- # return 0 00:06:24.966 17:31:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:24.966 17:31:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.966 17:31:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:24.966 17:31:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:24.966 17:31:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.966 17:31:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:24.966 17:31:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:24.966 17:31:45 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:24.966 17:31:45 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:24.966 17:31:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:24.966 17:31:45 -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 17:31:45 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:24.966 17:31:45 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:24.966 17:31:45 -- target/nvmf_example.sh@34 -- # nvmfpid=452083 00:06:24.966 17:31:45 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:24.966 17:31:45 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:24.966 17:31:45 -- target/nvmf_example.sh@36 -- # waitforlisten 452083 00:06:24.966 17:31:45 -- common/autotest_common.sh@819 -- # '[' -z 452083 ']' 00:06:24.966 17:31:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.966 17:31:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.966 17:31:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
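What the trace above has built is a two-port loopback topology on the E810 NICs found earlier (0000:86:00.0 and 0000:86:00.1, driver ice): nvmf_tcp_init moves the target-side port cvl_0_0 into the network namespace cvl_0_0_ns_spdk and gives it 10.0.0.2/24, leaves the initiator-side port cvl_0_1 in the default namespace with 10.0.0.1/24, opens TCP port 4420 in iptables, and ping-checks both directions before the example target is started inside the namespace. A condensed sketch of that same sequence, reusing the interface, namespace and address names from the trace (IF_TGT/IF_INI/NS are only shorthand for them):

  # shorthand for the names used in the trace above
  IF_TGT=cvl_0_0; IF_INI=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush dev "$IF_TGT"; ip -4 addr flush dev "$IF_INI"   # start from clean addresses
  ip netns add "$NS"                                   # namespace for the target side
  ip link set "$IF_TGT" netns "$NS"                    # move the target port into it
  ip addr add 10.0.0.1/24 dev "$IF_INI"                # initiator address, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"        # target address
  ip link set "$IF_INI" up
  ip netns exec "$NS" ip link set "$IF_TGT" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1     # reachability check

The example target itself is then launched inside the namespace, exactly as traced above, via ip netns exec cvl_0_0_ns_spdk .../build/examples/nvmf -i 0 -g 10000 -m 0xF, and the harness waits for its RPC socket at /var/tmp/spdk.sock.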
00:06:24.966 17:31:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.966 17:31:45 -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.966 17:31:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.966 17:31:46 -- common/autotest_common.sh@852 -- # return 0 00:06:24.966 17:31:46 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:24.966 17:31:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:24.966 17:31:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 17:31:46 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:24.966 17:31:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.966 17:31:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 17:31:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.966 17:31:46 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:24.966 17:31:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.966 17:31:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 17:31:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.966 17:31:46 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:24.966 17:31:46 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:24.966 17:31:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.966 17:31:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 17:31:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.966 17:31:46 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:24.966 17:31:46 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:24.966 17:31:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.966 17:31:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 17:31:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.966 17:31:46 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:24.966 17:31:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.966 17:31:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 17:31:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.966 17:31:46 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:24.966 17:31:46 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:24.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.179 Initializing NVMe Controllers 00:06:37.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:37.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:37.179 Initialization complete. Launching workers. 
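With the target up, the trace shows it being provisioned through the JSON-RPC interface and then exercised from the initiator side: a TCP transport is created (nvmf_create_transport -t tcp -o -u 8192), a 64 MiB malloc bdev with 512-byte blocks becomes Malloc0, subsystem nqn.2016-06.io.spdk:cnode1 gets that bdev as namespace 1 plus a TCP listener on 10.0.0.2:4420, and spdk_nvme_perf drives a 10-second, queue-depth-64, 4 KiB mixed random read/write workload against it. The method names and arguments in the sketch below are taken verbatim from the trace; the scripts/rpc.py invocation form and the default socket path are assumptions (the harness issues the same calls through its rpc_cmd wrapper against the /var/tmp/spdk.sock it waited on above):

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"          # assumed stand-in for the harness's rpc_cmd
  $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, flags as used by the harness
  $RPC bdev_malloc_create 64 512                        # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow-any-host, harness serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side, run from the default namespace, exactly as traced:
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The IOPS/latency table that follows summarizes the result of this run.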
00:06:37.179 ======================================================== 00:06:37.179 Latency(us) 00:06:37.179 Device Information : IOPS MiB/s Average min max 00:06:37.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13860.76 54.14 4617.17 688.19 15390.58 00:06:37.179 ======================================================== 00:06:37.179 Total : 13860.76 54.14 4617.17 688.19 15390.58 00:06:37.179 00:06:37.179 17:31:56 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:37.179 17:31:56 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:37.179 17:31:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:37.179 17:31:56 -- nvmf/common.sh@116 -- # sync 00:06:37.179 17:31:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:37.179 17:31:56 -- nvmf/common.sh@119 -- # set +e 00:06:37.179 17:31:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:37.179 17:31:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:37.179 rmmod nvme_tcp 00:06:37.179 rmmod nvme_fabrics 00:06:37.179 rmmod nvme_keyring 00:06:37.179 17:31:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:37.179 17:31:56 -- nvmf/common.sh@123 -- # set -e 00:06:37.179 17:31:56 -- nvmf/common.sh@124 -- # return 0 00:06:37.179 17:31:56 -- nvmf/common.sh@477 -- # '[' -n 452083 ']' 00:06:37.179 17:31:56 -- nvmf/common.sh@478 -- # killprocess 452083 00:06:37.179 17:31:56 -- common/autotest_common.sh@926 -- # '[' -z 452083 ']' 00:06:37.179 17:31:56 -- common/autotest_common.sh@930 -- # kill -0 452083 00:06:37.179 17:31:56 -- common/autotest_common.sh@931 -- # uname 00:06:37.179 17:31:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.179 17:31:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 452083 00:06:37.179 17:31:56 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:06:37.179 17:31:56 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:06:37.180 17:31:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 452083' 00:06:37.180 killing process with pid 452083 00:06:37.180 17:31:56 -- common/autotest_common.sh@945 -- # kill 452083 00:06:37.180 17:31:56 -- common/autotest_common.sh@950 -- # wait 452083 00:06:37.180 nvmf threads initialize successfully 00:06:37.180 bdev subsystem init successfully 00:06:37.180 created a nvmf target service 00:06:37.180 create targets's poll groups done 00:06:37.180 all subsystems of target started 00:06:37.180 nvmf target is running 00:06:37.180 all subsystems of target stopped 00:06:37.180 destroy targets's poll groups done 00:06:37.180 destroyed the nvmf target service 00:06:37.180 bdev subsystem finish successfully 00:06:37.180 nvmf threads destroy successfully 00:06:37.180 17:31:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:37.180 17:31:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:37.180 17:31:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:37.180 17:31:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:37.180 17:31:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:37.180 17:31:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.180 17:31:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.180 17:31:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.440 17:31:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:06:37.440 17:31:59 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:37.440 17:31:59 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:06:37.440 17:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:37.709 00:06:37.709 real 0m19.276s 00:06:37.709 user 0m45.697s 00:06:37.709 sys 0m5.532s 00:06:37.709 17:31:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.709 17:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:37.709 ************************************ 00:06:37.709 END TEST nvmf_example 00:06:37.709 ************************************ 00:06:37.709 17:31:59 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:37.709 17:31:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:37.709 17:31:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.709 17:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:37.709 ************************************ 00:06:37.709 START TEST nvmf_filesystem 00:06:37.709 ************************************ 00:06:37.709 17:31:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:37.709 * Looking for test storage... 00:06:37.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.710 17:31:59 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:37.710 17:31:59 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:37.710 17:31:59 -- common/autotest_common.sh@34 -- # set -e 00:06:37.710 17:31:59 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:37.710 17:31:59 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:37.710 17:31:59 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:37.710 17:31:59 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:37.710 17:31:59 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:37.710 17:31:59 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:37.710 17:31:59 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:37.710 17:31:59 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:37.710 17:31:59 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:37.710 17:31:59 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:37.710 17:31:59 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:37.710 17:31:59 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:37.710 17:31:59 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:37.710 17:31:59 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:37.710 17:31:59 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:37.710 17:31:59 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:37.710 17:31:59 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:37.710 17:31:59 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:37.710 17:31:59 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:37.710 17:31:59 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:37.710 17:31:59 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:37.710 17:31:59 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:37.710 17:31:59 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:37.710 17:31:59 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:06:37.710 17:31:59 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:37.710 17:31:59 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:37.710 17:31:59 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:37.710 17:31:59 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:37.710 17:31:59 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:37.710 17:31:59 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:37.710 17:31:59 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:37.710 17:31:59 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:37.710 17:31:59 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:37.710 17:31:59 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:37.710 17:31:59 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:37.710 17:31:59 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:37.710 17:31:59 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:37.710 17:31:59 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:37.710 17:31:59 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:37.710 17:31:59 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:37.710 17:31:59 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:37.710 17:31:59 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:37.710 17:31:59 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:37.710 17:31:59 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:37.710 17:31:59 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:37.710 17:31:59 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:37.710 17:31:59 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:37.710 17:31:59 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:37.710 17:31:59 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:37.710 17:31:59 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:37.710 17:31:59 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:37.710 17:31:59 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:37.710 17:31:59 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:37.710 17:31:59 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:37.710 17:31:59 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:37.710 17:31:59 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:37.710 17:31:59 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:37.710 17:31:59 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:37.710 17:31:59 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:37.710 17:31:59 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:37.710 17:31:59 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:37.710 17:31:59 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:37.710 17:31:59 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:37.710 17:31:59 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:37.710 17:31:59 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:37.710 17:31:59 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:37.710 17:31:59 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:37.710 17:31:59 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:37.710 17:31:59 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:37.710 17:31:59 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:37.710 17:31:59 -- 
common/build_config.sh@67 -- # CONFIG_FC=n 00:06:37.710 17:31:59 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:37.710 17:31:59 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:37.710 17:31:59 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:37.710 17:31:59 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:37.710 17:31:59 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:37.710 17:31:59 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:37.710 17:31:59 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:37.710 17:31:59 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:37.710 17:31:59 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:37.710 17:31:59 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:37.710 17:31:59 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:37.710 17:31:59 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:37.710 17:31:59 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:37.710 17:31:59 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:37.710 17:31:59 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:37.710 17:31:59 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:37.710 17:31:59 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:37.710 17:31:59 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:37.710 17:31:59 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:37.710 17:31:59 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:37.710 17:31:59 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:37.710 17:31:59 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:37.710 17:31:59 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:37.710 17:31:59 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:37.710 17:31:59 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:37.710 17:31:59 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:37.710 17:31:59 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:37.710 17:31:59 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:37.710 #define SPDK_CONFIG_H 00:06:37.710 #define SPDK_CONFIG_APPS 1 00:06:37.710 #define SPDK_CONFIG_ARCH native 00:06:37.710 #undef SPDK_CONFIG_ASAN 00:06:37.710 #undef SPDK_CONFIG_AVAHI 00:06:37.710 #undef SPDK_CONFIG_CET 00:06:37.710 #define SPDK_CONFIG_COVERAGE 1 00:06:37.710 #define SPDK_CONFIG_CROSS_PREFIX 00:06:37.710 #undef SPDK_CONFIG_CRYPTO 00:06:37.710 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:37.710 #undef SPDK_CONFIG_CUSTOMOCF 00:06:37.710 #undef SPDK_CONFIG_DAOS 00:06:37.710 #define SPDK_CONFIG_DAOS_DIR 00:06:37.710 #define SPDK_CONFIG_DEBUG 1 00:06:37.710 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:37.710 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:37.710 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:37.710 #define SPDK_CONFIG_DPDK_LIB_DIR 
00:06:37.710 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:37.710 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:37.710 #define SPDK_CONFIG_EXAMPLES 1 00:06:37.710 #undef SPDK_CONFIG_FC 00:06:37.710 #define SPDK_CONFIG_FC_PATH 00:06:37.710 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:37.710 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:37.710 #undef SPDK_CONFIG_FUSE 00:06:37.710 #undef SPDK_CONFIG_FUZZER 00:06:37.710 #define SPDK_CONFIG_FUZZER_LIB 00:06:37.710 #undef SPDK_CONFIG_GOLANG 00:06:37.710 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:37.710 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:37.710 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:37.710 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:37.710 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:37.710 #define SPDK_CONFIG_IDXD 1 00:06:37.710 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:37.710 #undef SPDK_CONFIG_IPSEC_MB 00:06:37.710 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:37.710 #define SPDK_CONFIG_ISAL 1 00:06:37.710 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:37.710 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:37.710 #define SPDK_CONFIG_LIBDIR 00:06:37.710 #undef SPDK_CONFIG_LTO 00:06:37.710 #define SPDK_CONFIG_MAX_LCORES 00:06:37.710 #define SPDK_CONFIG_NVME_CUSE 1 00:06:37.710 #undef SPDK_CONFIG_OCF 00:06:37.710 #define SPDK_CONFIG_OCF_PATH 00:06:37.710 #define SPDK_CONFIG_OPENSSL_PATH 00:06:37.710 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:37.710 #undef SPDK_CONFIG_PGO_USE 00:06:37.710 #define SPDK_CONFIG_PREFIX /usr/local 00:06:37.710 #undef SPDK_CONFIG_RAID5F 00:06:37.710 #undef SPDK_CONFIG_RBD 00:06:37.710 #define SPDK_CONFIG_RDMA 1 00:06:37.710 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:37.710 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:37.711 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:37.711 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:37.711 #define SPDK_CONFIG_SHARED 1 00:06:37.711 #undef SPDK_CONFIG_SMA 00:06:37.711 #define SPDK_CONFIG_TESTS 1 00:06:37.711 #undef SPDK_CONFIG_TSAN 00:06:37.711 #define SPDK_CONFIG_UBLK 1 00:06:37.711 #define SPDK_CONFIG_UBSAN 1 00:06:37.711 #undef SPDK_CONFIG_UNIT_TESTS 00:06:37.711 #undef SPDK_CONFIG_URING 00:06:37.711 #define SPDK_CONFIG_URING_PATH 00:06:37.711 #undef SPDK_CONFIG_URING_ZNS 00:06:37.711 #undef SPDK_CONFIG_USDT 00:06:37.711 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:37.711 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:37.711 #undef SPDK_CONFIG_VFIO_USER 00:06:37.711 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:37.711 #define SPDK_CONFIG_VHOST 1 00:06:37.711 #define SPDK_CONFIG_VIRTIO 1 00:06:37.711 #undef SPDK_CONFIG_VTUNE 00:06:37.711 #define SPDK_CONFIG_VTUNE_DIR 00:06:37.711 #define SPDK_CONFIG_WERROR 1 00:06:37.711 #define SPDK_CONFIG_WPDK_DIR 00:06:37.711 #undef SPDK_CONFIG_XNVME 00:06:37.711 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:37.711 17:31:59 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:37.711 17:31:59 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.711 17:31:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.711 17:31:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.711 17:31:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.711 17:31:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.711 17:31:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.711 17:31:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.711 17:31:59 -- paths/export.sh@5 -- # export PATH 00:06:37.711 17:31:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.711 17:31:59 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:37.711 17:31:59 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:37.711 17:31:59 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:37.711 17:31:59 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:37.711 17:31:59 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:37.711 17:31:59 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:37.711 17:31:59 -- pm/common@16 -- # TEST_TAG=N/A 00:06:37.711 17:31:59 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:37.711 17:31:59 -- common/autotest_common.sh@52 -- # : 1 00:06:37.711 17:31:59 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:37.711 17:31:59 -- common/autotest_common.sh@56 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:37.711 17:31:59 -- 
common/autotest_common.sh@58 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:37.711 17:31:59 -- common/autotest_common.sh@60 -- # : 1 00:06:37.711 17:31:59 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:37.711 17:31:59 -- common/autotest_common.sh@62 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:37.711 17:31:59 -- common/autotest_common.sh@64 -- # : 00:06:37.711 17:31:59 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:37.711 17:31:59 -- common/autotest_common.sh@66 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:06:37.711 17:31:59 -- common/autotest_common.sh@68 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:37.711 17:31:59 -- common/autotest_common.sh@70 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:37.711 17:31:59 -- common/autotest_common.sh@72 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:37.711 17:31:59 -- common/autotest_common.sh@74 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:37.711 17:31:59 -- common/autotest_common.sh@76 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:37.711 17:31:59 -- common/autotest_common.sh@78 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:37.711 17:31:59 -- common/autotest_common.sh@80 -- # : 1 00:06:37.711 17:31:59 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:37.711 17:31:59 -- common/autotest_common.sh@82 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:37.711 17:31:59 -- common/autotest_common.sh@84 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:37.711 17:31:59 -- common/autotest_common.sh@86 -- # : 1 00:06:37.711 17:31:59 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:37.711 17:31:59 -- common/autotest_common.sh@88 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:37.711 17:31:59 -- common/autotest_common.sh@90 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:37.711 17:31:59 -- common/autotest_common.sh@92 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:37.711 17:31:59 -- common/autotest_common.sh@94 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:37.711 17:31:59 -- common/autotest_common.sh@96 -- # : tcp 00:06:37.711 17:31:59 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:37.711 17:31:59 -- common/autotest_common.sh@98 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:37.711 17:31:59 -- common/autotest_common.sh@100 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:37.711 17:31:59 -- common/autotest_common.sh@102 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:37.711 17:31:59 -- common/autotest_common.sh@104 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:37.711 
17:31:59 -- common/autotest_common.sh@106 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:37.711 17:31:59 -- common/autotest_common.sh@108 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:37.711 17:31:59 -- common/autotest_common.sh@110 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:37.711 17:31:59 -- common/autotest_common.sh@112 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:37.711 17:31:59 -- common/autotest_common.sh@114 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:37.711 17:31:59 -- common/autotest_common.sh@116 -- # : 1 00:06:37.711 17:31:59 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:37.711 17:31:59 -- common/autotest_common.sh@118 -- # : 00:06:37.711 17:31:59 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:37.711 17:31:59 -- common/autotest_common.sh@120 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:37.711 17:31:59 -- common/autotest_common.sh@122 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:37.711 17:31:59 -- common/autotest_common.sh@124 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:37.711 17:31:59 -- common/autotest_common.sh@126 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:37.711 17:31:59 -- common/autotest_common.sh@128 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:37.711 17:31:59 -- common/autotest_common.sh@130 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:37.711 17:31:59 -- common/autotest_common.sh@132 -- # : 00:06:37.711 17:31:59 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:37.711 17:31:59 -- common/autotest_common.sh@134 -- # : true 00:06:37.711 17:31:59 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:37.711 17:31:59 -- common/autotest_common.sh@136 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:37.711 17:31:59 -- common/autotest_common.sh@138 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:37.711 17:31:59 -- common/autotest_common.sh@140 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:37.711 17:31:59 -- common/autotest_common.sh@142 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:37.711 17:31:59 -- common/autotest_common.sh@144 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:37.711 17:31:59 -- common/autotest_common.sh@146 -- # : 0 00:06:37.711 17:31:59 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:37.711 17:31:59 -- common/autotest_common.sh@148 -- # : e810 00:06:37.712 17:31:59 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:37.712 17:31:59 -- common/autotest_common.sh@150 -- # : 0 00:06:37.712 17:31:59 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:37.712 17:31:59 -- common/autotest_common.sh@152 -- # : 0 00:06:37.712 17:31:59 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:06:37.712 17:31:59 -- common/autotest_common.sh@154 -- # : 0 00:06:37.712 17:31:59 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:37.712 17:31:59 -- common/autotest_common.sh@156 -- # : 0 00:06:37.712 17:31:59 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:37.712 17:31:59 -- common/autotest_common.sh@158 -- # : 0 00:06:37.712 17:31:59 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:37.712 17:31:59 -- common/autotest_common.sh@160 -- # : 0 00:06:37.712 17:31:59 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:37.712 17:31:59 -- common/autotest_common.sh@163 -- # : 00:06:37.712 17:31:59 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:37.712 17:31:59 -- common/autotest_common.sh@165 -- # : 0 00:06:37.712 17:31:59 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:37.712 17:31:59 -- common/autotest_common.sh@167 -- # : 0 00:06:37.712 17:31:59 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:37.712 17:31:59 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:37.712 17:31:59 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:37.712 17:31:59 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:37.712 17:31:59 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:37.712 17:31:59 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:37.712 17:31:59 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:37.712 17:31:59 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:37.712 17:31:59 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:37.712 17:31:59 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:37.712 17:31:59 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:37.712 17:31:59 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:37.712 17:31:59 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:37.712 17:31:59 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:37.712 17:31:59 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:37.712 17:31:59 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:37.712 17:31:59 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:37.712 17:31:59 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:37.712 17:31:59 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:37.712 17:31:59 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:37.712 17:31:59 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:37.712 17:31:59 -- common/autotest_common.sh@196 -- # cat 00:06:37.712 17:31:59 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:37.712 17:31:59 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:37.712 17:31:59 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:37.712 17:31:59 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:37.712 17:31:59 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:37.712 17:31:59 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:37.712 17:31:59 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:37.712 17:31:59 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:37.712 17:31:59 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:37.712 17:31:59 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:37.712 17:31:59 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:37.712 17:31:59 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:37.712 17:31:59 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:37.712 17:31:59 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:37.712 17:31:59 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:37.712 17:31:59 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:37.712 17:31:59 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:37.712 17:31:59 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:37.712 17:31:59 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:37.712 17:31:59 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:06:37.712 17:31:59 -- common/autotest_common.sh@249 -- # export valgrind= 00:06:37.712 17:31:59 -- common/autotest_common.sh@249 -- # valgrind= 00:06:37.712 17:31:59 -- common/autotest_common.sh@255 -- # uname -s 00:06:37.712 17:31:59 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:06:37.712 17:31:59 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:06:37.712 17:31:59 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:06:37.712 17:31:59 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:06:37.712 17:31:59 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:37.712 17:31:59 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:37.712 17:31:59 -- common/autotest_common.sh@265 -- # MAKE=make 00:06:37.712 17:31:59 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j96 00:06:37.712 17:31:59 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:06:37.712 17:31:59 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:06:37.712 17:31:59 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:37.712 17:31:59 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:06:37.712 17:31:59 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:06:37.712 17:31:59 -- common/autotest_common.sh@291 -- # for i in "$@" 00:06:37.712 17:31:59 -- common/autotest_common.sh@292 -- # case "$i" in 00:06:37.712 17:31:59 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:06:37.712 17:31:59 -- common/autotest_common.sh@309 -- # [[ -z 454529 ]] 00:06:37.712 17:31:59 -- common/autotest_common.sh@309 -- # 
kill -0 454529 00:06:37.712 17:31:59 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:06:37.712 17:31:59 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:06:37.712 17:31:59 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:06:37.712 17:31:59 -- common/autotest_common.sh@322 -- # local mount target_dir 00:06:37.712 17:31:59 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:06:37.712 17:31:59 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:06:37.712 17:31:59 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:06:37.712 17:31:59 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:06:37.712 17:31:59 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.O6mmV4 00:06:37.712 17:31:59 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:37.712 17:31:59 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:06:37.712 17:31:59 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:06:37.712 17:31:59 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.O6mmV4/tests/target /tmp/spdk.O6mmV4 00:06:37.712 17:31:59 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:06:37.712 17:31:59 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:37.712 17:31:59 -- common/autotest_common.sh@318 -- # df -T 00:06:37.712 17:31:59 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:06:37.712 17:31:59 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:06:37.712 17:31:59 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:06:37.712 17:31:59 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:06:37.712 17:31:59 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:06:37.712 17:31:59 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:06:37.712 17:31:59 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:37.712 17:31:59 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:06:37.712 17:31:59 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:06:37.712 17:31:59 -- common/autotest_common.sh@353 -- # avails["$mount"]=950202368 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:06:37.713 17:31:59 -- common/autotest_common.sh@354 -- # uses["$mount"]=4334227456 00:06:37.713 17:31:59 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:37.713 17:31:59 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:06:37.713 17:31:59 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # avails["$mount"]=185252831232 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # sizes["$mount"]=195974283264 00:06:37.713 17:31:59 -- common/autotest_common.sh@354 -- # uses["$mount"]=10721452032 00:06:37.713 17:31:59 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:37.713 17:31:59 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:37.713 17:31:59 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # avails["$mount"]=97933623296 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=97987141632 00:06:37.713 17:31:59 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:06:37.713 17:31:59 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:37.713 17:31:59 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:37.713 17:31:59 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # avails["$mount"]=39185477632 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # sizes["$mount"]=39194857472 00:06:37.713 17:31:59 -- common/autotest_common.sh@354 -- # uses["$mount"]=9379840 00:06:37.713 17:31:59 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:37.713 17:31:59 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:37.713 17:31:59 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # avails["$mount"]=97984618496 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # sizes["$mount"]=97987141632 00:06:37.713 17:31:59 -- common/autotest_common.sh@354 -- # uses["$mount"]=2523136 00:06:37.713 17:31:59 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:37.713 17:31:59 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:37.713 17:31:59 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # avails["$mount"]=19597422592 00:06:37.713 17:31:59 -- common/autotest_common.sh@353 -- # sizes["$mount"]=19597426688 00:06:37.713 17:31:59 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:06:37.713 17:31:59 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:37.713 17:31:59 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:06:37.713 * Looking for test storage... 
00:06:37.713 17:31:59 -- common/autotest_common.sh@359 -- # local target_space new_size 00:06:37.713 17:31:59 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:06:37.713 17:31:59 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.713 17:31:59 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:37.973 17:31:59 -- common/autotest_common.sh@363 -- # mount=/ 00:06:37.973 17:31:59 -- common/autotest_common.sh@365 -- # target_space=185252831232 00:06:37.973 17:31:59 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:06:37.973 17:31:59 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:06:37.973 17:31:59 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:06:37.973 17:31:59 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:06:37.973 17:31:59 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:06:37.973 17:31:59 -- common/autotest_common.sh@372 -- # new_size=12936044544 00:06:37.973 17:31:59 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:37.973 17:31:59 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.973 17:31:59 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.973 17:31:59 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.973 17:31:59 -- common/autotest_common.sh@380 -- # return 0 00:06:37.973 17:31:59 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:06:37.973 17:31:59 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:06:37.973 17:31:59 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:37.973 17:31:59 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:37.973 17:31:59 -- common/autotest_common.sh@1672 -- # true 00:06:37.973 17:31:59 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:37.973 17:31:59 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:37.973 17:31:59 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:37.973 17:31:59 -- common/autotest_common.sh@27 -- # exec 00:06:37.973 17:31:59 -- common/autotest_common.sh@29 -- # exec 00:06:37.973 17:31:59 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:37.973 17:31:59 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:37.973 17:31:59 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:37.973 17:31:59 -- common/autotest_common.sh@18 -- # set -x 00:06:37.973 17:31:59 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.973 17:31:59 -- nvmf/common.sh@7 -- # uname -s 00:06:37.973 17:31:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.973 17:31:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.973 17:31:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.973 17:31:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.973 17:31:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.973 17:31:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.973 17:31:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.973 17:31:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.973 17:31:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.973 17:31:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.973 17:31:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:37.973 17:31:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:37.973 17:31:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.973 17:31:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.973 17:31:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.973 17:31:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.973 17:31:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.973 17:31:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.973 17:31:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.973 17:31:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.973 17:31:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.973 17:31:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.973 17:31:59 -- paths/export.sh@5 -- # export PATH 00:06:37.973 17:31:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.973 17:31:59 -- nvmf/common.sh@46 -- # : 0 00:06:37.973 17:31:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:37.973 17:31:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:37.973 17:31:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:37.973 17:31:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.973 17:31:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.973 17:31:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:37.973 17:31:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:37.973 17:31:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:37.973 17:31:59 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:37.973 17:31:59 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:37.973 17:31:59 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:37.973 17:31:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:37.973 17:31:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.973 17:31:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:37.973 17:31:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:37.973 17:31:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:37.973 17:31:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.973 17:31:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.973 17:31:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.973 17:31:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:37.973 17:31:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:37.973 17:31:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:37.973 17:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.252 17:32:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:43.252 17:32:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:43.252 17:32:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:43.252 17:32:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:43.252 17:32:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:43.252 17:32:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:43.252 17:32:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:43.252 17:32:04 -- 
nvmf/common.sh@294 -- # net_devs=() 00:06:43.252 17:32:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:43.252 17:32:04 -- nvmf/common.sh@295 -- # e810=() 00:06:43.252 17:32:04 -- nvmf/common.sh@295 -- # local -ga e810 00:06:43.252 17:32:04 -- nvmf/common.sh@296 -- # x722=() 00:06:43.252 17:32:04 -- nvmf/common.sh@296 -- # local -ga x722 00:06:43.252 17:32:04 -- nvmf/common.sh@297 -- # mlx=() 00:06:43.252 17:32:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:43.252 17:32:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.252 17:32:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:43.252 17:32:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:43.252 17:32:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:43.252 17:32:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:43.252 17:32:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:43.252 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:43.252 17:32:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:43.252 17:32:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:43.252 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:43.252 17:32:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:43.252 17:32:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:43.252 17:32:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.252 17:32:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:43.252 17:32:04 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.252 17:32:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:43.252 Found net devices under 0000:86:00.0: cvl_0_0 00:06:43.252 17:32:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.252 17:32:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:43.252 17:32:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.252 17:32:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:43.252 17:32:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.252 17:32:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:43.252 Found net devices under 0000:86:00.1: cvl_0_1 00:06:43.252 17:32:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.252 17:32:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:43.252 17:32:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:43.252 17:32:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:43.252 17:32:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:43.252 17:32:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.252 17:32:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.252 17:32:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:43.252 17:32:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:43.252 17:32:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:43.252 17:32:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:43.252 17:32:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:43.252 17:32:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:43.252 17:32:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.252 17:32:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:43.252 17:32:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:43.252 17:32:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:43.252 17:32:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.252 17:32:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.252 17:32:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.252 17:32:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:43.252 17:32:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.511 17:32:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.511 17:32:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.511 17:32:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:43.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:43.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:06:43.511 00:06:43.511 --- 10.0.0.2 ping statistics --- 00:06:43.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.512 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:06:43.512 17:32:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:43.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:06:43.512 00:06:43.512 --- 10.0.0.1 ping statistics --- 00:06:43.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.512 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:06:43.512 17:32:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.512 17:32:04 -- nvmf/common.sh@410 -- # return 0 00:06:43.512 17:32:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:43.512 17:32:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.512 17:32:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:43.512 17:32:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:43.512 17:32:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.512 17:32:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:43.512 17:32:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:43.512 17:32:04 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:43.512 17:32:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:43.512 17:32:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.512 17:32:04 -- common/autotest_common.sh@10 -- # set +x 00:06:43.512 ************************************ 00:06:43.512 START TEST nvmf_filesystem_no_in_capsule 00:06:43.512 ************************************ 00:06:43.512 17:32:04 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:06:43.512 17:32:04 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:43.512 17:32:04 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:43.512 17:32:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:43.512 17:32:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:43.512 17:32:04 -- common/autotest_common.sh@10 -- # set +x 00:06:43.512 17:32:04 -- nvmf/common.sh@469 -- # nvmfpid=457568 00:06:43.512 17:32:04 -- nvmf/common.sh@470 -- # waitforlisten 457568 00:06:43.512 17:32:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:43.512 17:32:04 -- common/autotest_common.sh@819 -- # '[' -z 457568 ']' 00:06:43.512 17:32:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.512 17:32:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:43.512 17:32:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.512 17:32:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:43.512 17:32:04 -- common/autotest_common.sh@10 -- # set +x 00:06:43.512 [2024-07-24 17:32:05.036126] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
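For reference, the network bring-up that nvmf_tcp_init traces above reduces to the commands below. This is a minimal sketch, not the harness code itself: the interface names (cvl_0_0 / cvl_0_1), the addresses and the namespace name are taken from this log, and everything has to run as root. One E810 port is moved into a private namespace to act as the target side, the other stays in the root namespace as the initiator, and a ping in each direction proves the link before the NVMe/TCP target is started below.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the default NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator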
00:06:43.512 [2024-07-24 17:32:05.036174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.512 [2024-07-24 17:32:05.096597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.771 [2024-07-24 17:32:05.179537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.771 [2024-07-24 17:32:05.179644] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.771 [2024-07-24 17:32:05.179652] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.771 [2024-07-24 17:32:05.179658] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:43.771 [2024-07-24 17:32:05.179702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.771 [2024-07-24 17:32:05.179724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.771 [2024-07-24 17:32:05.179793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.771 [2024-07-24 17:32:05.179794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.338 17:32:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.338 17:32:05 -- common/autotest_common.sh@852 -- # return 0 00:06:44.338 17:32:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:44.338 17:32:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:44.338 17:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:44.338 17:32:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.338 17:32:05 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:44.338 17:32:05 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:44.338 17:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.338 17:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:44.338 [2024-07-24 17:32:05.894435] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.338 17:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.338 17:32:05 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:44.338 17:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.338 17:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:44.596 Malloc1 00:06:44.596 17:32:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.596 17:32:06 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:44.596 17:32:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.596 17:32:06 -- common/autotest_common.sh@10 -- # set +x 00:06:44.596 17:32:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.596 17:32:06 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:44.597 17:32:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.597 17:32:06 -- common/autotest_common.sh@10 -- # set +x 00:06:44.597 17:32:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.597 17:32:06 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
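The target-side configuration traced above is issued through the harness helper rpc_cmd, which from this log appears to be a thin wrapper around SPDK's JSON-RPC client. Spelled out directly, the sequence is as follows; the use of scripts/rpc.py and the relative paths are assumptions, while the arguments are copied verbatim from the trace. The 'Listening on 10.0.0.2 port 4420' notice just below confirms the last call.

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target app inside the namespace
nvmfpid=$!
# wait for the RPC socket (/var/tmp/spdk.sock) before issuing RPCs

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0                # TCP transport, no in-capsule data
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                       # 512 MiB RAM bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420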
00:06:44.597 17:32:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.597 17:32:06 -- common/autotest_common.sh@10 -- # set +x 00:06:44.597 [2024-07-24 17:32:06.039672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.597 17:32:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.597 17:32:06 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:44.597 17:32:06 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:44.597 17:32:06 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:44.597 17:32:06 -- common/autotest_common.sh@1359 -- # local bs 00:06:44.597 17:32:06 -- common/autotest_common.sh@1360 -- # local nb 00:06:44.597 17:32:06 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:44.597 17:32:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.597 17:32:06 -- common/autotest_common.sh@10 -- # set +x 00:06:44.597 17:32:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.597 17:32:06 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:44.597 { 00:06:44.597 "name": "Malloc1", 00:06:44.597 "aliases": [ 00:06:44.597 "19facacc-2802-4a5b-ac01-3f1e035f345d" 00:06:44.597 ], 00:06:44.597 "product_name": "Malloc disk", 00:06:44.597 "block_size": 512, 00:06:44.597 "num_blocks": 1048576, 00:06:44.597 "uuid": "19facacc-2802-4a5b-ac01-3f1e035f345d", 00:06:44.597 "assigned_rate_limits": { 00:06:44.597 "rw_ios_per_sec": 0, 00:06:44.597 "rw_mbytes_per_sec": 0, 00:06:44.597 "r_mbytes_per_sec": 0, 00:06:44.597 "w_mbytes_per_sec": 0 00:06:44.597 }, 00:06:44.597 "claimed": true, 00:06:44.597 "claim_type": "exclusive_write", 00:06:44.597 "zoned": false, 00:06:44.597 "supported_io_types": { 00:06:44.597 "read": true, 00:06:44.597 "write": true, 00:06:44.597 "unmap": true, 00:06:44.597 "write_zeroes": true, 00:06:44.597 "flush": true, 00:06:44.597 "reset": true, 00:06:44.597 "compare": false, 00:06:44.597 "compare_and_write": false, 00:06:44.597 "abort": true, 00:06:44.597 "nvme_admin": false, 00:06:44.597 "nvme_io": false 00:06:44.597 }, 00:06:44.597 "memory_domains": [ 00:06:44.597 { 00:06:44.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.597 "dma_device_type": 2 00:06:44.597 } 00:06:44.597 ], 00:06:44.597 "driver_specific": {} 00:06:44.597 } 00:06:44.597 ]' 00:06:44.597 17:32:06 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:44.597 17:32:06 -- common/autotest_common.sh@1362 -- # bs=512 00:06:44.597 17:32:06 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:44.597 17:32:06 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:44.597 17:32:06 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:44.597 17:32:06 -- common/autotest_common.sh@1367 -- # echo 512 00:06:44.597 17:32:06 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:44.597 17:32:06 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:45.975 17:32:07 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:45.975 17:32:07 -- common/autotest_common.sh@1177 -- # local i=0 00:06:45.975 17:32:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:45.975 17:32:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:45.975 17:32:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:47.875 17:32:09 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:47.875 17:32:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:47.875 17:32:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:47.875 17:32:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:47.875 17:32:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:47.875 17:32:09 -- common/autotest_common.sh@1187 -- # return 0 00:06:47.875 17:32:09 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:47.875 17:32:09 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:47.875 17:32:09 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:47.875 17:32:09 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:47.875 17:32:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:47.875 17:32:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:47.875 17:32:09 -- setup/common.sh@80 -- # echo 536870912 00:06:47.875 17:32:09 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:47.875 17:32:09 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:47.875 17:32:09 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:47.875 17:32:09 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:47.875 17:32:09 -- target/filesystem.sh@69 -- # partprobe 00:06:48.810 17:32:10 -- target/filesystem.sh@70 -- # sleep 1 00:06:49.747 17:32:11 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:49.747 17:32:11 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:49.747 17:32:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:49.747 17:32:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.747 17:32:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.747 ************************************ 00:06:49.747 START TEST filesystem_ext4 00:06:49.747 ************************************ 00:06:49.747 17:32:11 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:49.747 17:32:11 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:49.747 17:32:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:49.747 17:32:11 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:49.747 17:32:11 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:49.747 17:32:11 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:49.747 17:32:11 -- common/autotest_common.sh@904 -- # local i=0 00:06:49.747 17:32:11 -- common/autotest_common.sh@905 -- # local force 00:06:49.747 17:32:11 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:49.747 17:32:11 -- common/autotest_common.sh@908 -- # force=-F 00:06:49.747 17:32:11 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:49.747 mke2fs 1.46.5 (30-Dec-2021) 00:06:49.747 Discarding device blocks: 0/522240 done 00:06:49.747 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:49.747 Filesystem UUID: 82190da8-c091-41db-b44c-276f216f0a34 00:06:49.747 Superblock backups stored on blocks: 00:06:49.747 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:49.747 00:06:49.747 Allocating group tables: 0/64 done 00:06:49.747 Writing inode tables: 0/64 done 00:06:50.006 Creating journal (8192 blocks): done 00:06:50.006 Writing superblocks and filesystem accounting information: 0/64 done 00:06:50.006 00:06:50.006 17:32:11 -- 
common/autotest_common.sh@921 -- # return 0 00:06:50.006 17:32:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:51.070 17:32:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:51.070 17:32:12 -- target/filesystem.sh@25 -- # sync 00:06:51.070 17:32:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:51.070 17:32:12 -- target/filesystem.sh@27 -- # sync 00:06:51.070 17:32:12 -- target/filesystem.sh@29 -- # i=0 00:06:51.070 17:32:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:51.070 17:32:12 -- target/filesystem.sh@37 -- # kill -0 457568 00:06:51.070 17:32:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:51.070 17:32:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:51.070 17:32:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:51.070 17:32:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:51.070 00:06:51.070 real 0m1.254s 00:06:51.070 user 0m0.018s 00:06:51.070 sys 0m0.049s 00:06:51.070 17:32:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.070 17:32:12 -- common/autotest_common.sh@10 -- # set +x 00:06:51.070 ************************************ 00:06:51.070 END TEST filesystem_ext4 00:06:51.070 ************************************ 00:06:51.070 17:32:12 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:51.070 17:32:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:51.070 17:32:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.070 17:32:12 -- common/autotest_common.sh@10 -- # set +x 00:06:51.070 ************************************ 00:06:51.070 START TEST filesystem_btrfs 00:06:51.070 ************************************ 00:06:51.070 17:32:12 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:51.070 17:32:12 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:51.070 17:32:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.070 17:32:12 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:51.070 17:32:12 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:51.070 17:32:12 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:51.070 17:32:12 -- common/autotest_common.sh@904 -- # local i=0 00:06:51.070 17:32:12 -- common/autotest_common.sh@905 -- # local force 00:06:51.070 17:32:12 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:51.070 17:32:12 -- common/autotest_common.sh@910 -- # force=-f 00:06:51.070 17:32:12 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:51.329 btrfs-progs v6.6.2 00:06:51.329 See https://btrfs.readthedocs.io for more information. 00:06:51.329 00:06:51.329 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
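Each filesystem leg runs the same host-side cycle that the ext4 leg above has just completed: connect to the subsystem, wait for the namespace to appear as a block device, lay down a single GPT partition, then make, mount, exercise and unmount the filesystem while checking that the target stays alive. A condensed sketch follows, with the NQNs, serial, addresses and mount point taken from this log and the retry loops simplified; the btrfs leg continuing below and the xfs leg after it only swap in mkfs.btrfs -f and mkfs.xfs -f.

nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
             --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done   # /dev/nvme0n1 shows up
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1

mkfs.ext4 -F /dev/nvme0n1p1
mkdir -p /mnt/device
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa; sync
rm /mnt/device/aaa; sync
umount /mnt/device
kill -0 "$nvmfpid"                      # target (pid 457568 in this run) must still be running
lsblk -l -o NAME | grep -q -w nvme0n1   # device and partition still visible after the I/O
lsblk -l -o NAME | grep -q -w nvme0n1p1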
00:06:51.329 NOTE: several default settings have changed in version 5.15, please make sure 00:06:51.329 this does not affect your deployments: 00:06:51.329 - DUP for metadata (-m dup) 00:06:51.329 - enabled no-holes (-O no-holes) 00:06:51.329 - enabled free-space-tree (-R free-space-tree) 00:06:51.329 00:06:51.329 Label: (null) 00:06:51.329 UUID: b7d45547-705e-46d3-8806-b129bed7ba4b 00:06:51.329 Node size: 16384 00:06:51.329 Sector size: 4096 00:06:51.329 Filesystem size: 510.00MiB 00:06:51.329 Block group profiles: 00:06:51.329 Data: single 8.00MiB 00:06:51.329 Metadata: DUP 32.00MiB 00:06:51.329 System: DUP 8.00MiB 00:06:51.329 SSD detected: yes 00:06:51.329 Zoned device: no 00:06:51.329 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:51.329 Runtime features: free-space-tree 00:06:51.329 Checksum: crc32c 00:06:51.329 Number of devices: 1 00:06:51.329 Devices: 00:06:51.329 ID SIZE PATH 00:06:51.329 1 510.00MiB /dev/nvme0n1p1 00:06:51.329 00:06:51.329 17:32:12 -- common/autotest_common.sh@921 -- # return 0 00:06:51.329 17:32:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:51.587 17:32:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:51.588 17:32:13 -- target/filesystem.sh@25 -- # sync 00:06:51.588 17:32:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:51.588 17:32:13 -- target/filesystem.sh@27 -- # sync 00:06:51.588 17:32:13 -- target/filesystem.sh@29 -- # i=0 00:06:51.588 17:32:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:51.588 17:32:13 -- target/filesystem.sh@37 -- # kill -0 457568 00:06:51.588 17:32:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:51.588 17:32:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:51.588 17:32:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:51.846 17:32:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:51.846 00:06:51.846 real 0m0.734s 00:06:51.846 user 0m0.021s 00:06:51.846 sys 0m0.060s 00:06:51.846 17:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.846 17:32:13 -- common/autotest_common.sh@10 -- # set +x 00:06:51.846 ************************************ 00:06:51.846 END TEST filesystem_btrfs 00:06:51.846 ************************************ 00:06:51.846 17:32:13 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:51.846 17:32:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:51.846 17:32:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.846 17:32:13 -- common/autotest_common.sh@10 -- # set +x 00:06:51.846 ************************************ 00:06:51.846 START TEST filesystem_xfs 00:06:51.846 ************************************ 00:06:51.846 17:32:13 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:51.846 17:32:13 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:51.846 17:32:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.846 17:32:13 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:51.846 17:32:13 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:51.846 17:32:13 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:51.846 17:32:13 -- common/autotest_common.sh@904 -- # local i=0 00:06:51.846 17:32:13 -- common/autotest_common.sh@905 -- # local force 00:06:51.846 17:32:13 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:51.846 17:32:13 -- common/autotest_common.sh@910 -- # force=-f 00:06:51.846 17:32:13 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:51.846 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:51.846 = sectsz=512 attr=2, projid32bit=1 00:06:51.846 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:51.846 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:51.846 data = bsize=4096 blocks=130560, imaxpct=25 00:06:51.846 = sunit=0 swidth=0 blks 00:06:51.846 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:51.846 log =internal log bsize=4096 blocks=16384, version=2 00:06:51.846 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:51.846 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:53.125 Discarding blocks...Done. 00:06:53.125 17:32:14 -- common/autotest_common.sh@921 -- # return 0 00:06:53.125 17:32:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:55.027 17:32:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:55.027 17:32:16 -- target/filesystem.sh@25 -- # sync 00:06:55.027 17:32:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:55.027 17:32:16 -- target/filesystem.sh@27 -- # sync 00:06:55.027 17:32:16 -- target/filesystem.sh@29 -- # i=0 00:06:55.027 17:32:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:55.027 17:32:16 -- target/filesystem.sh@37 -- # kill -0 457568 00:06:55.027 17:32:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:55.027 17:32:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:55.027 17:32:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:55.027 17:32:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:55.027 00:06:55.027 real 0m3.202s 00:06:55.027 user 0m0.026s 00:06:55.027 sys 0m0.046s 00:06:55.027 17:32:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.027 17:32:16 -- common/autotest_common.sh@10 -- # set +x 00:06:55.027 ************************************ 00:06:55.027 END TEST filesystem_xfs 00:06:55.027 ************************************ 00:06:55.027 17:32:16 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:55.286 17:32:16 -- target/filesystem.sh@93 -- # sync 00:06:55.286 17:32:16 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:55.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:55.286 17:32:16 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:55.286 17:32:16 -- common/autotest_common.sh@1198 -- # local i=0 00:06:55.286 17:32:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:55.286 17:32:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:55.286 17:32:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:55.286 17:32:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:55.286 17:32:16 -- common/autotest_common.sh@1210 -- # return 0 00:06:55.286 17:32:16 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:55.286 17:32:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.286 17:32:16 -- common/autotest_common.sh@10 -- # set +x 00:06:55.286 17:32:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.286 17:32:16 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:55.545 17:32:16 -- target/filesystem.sh@101 -- # killprocess 457568 00:06:55.545 17:32:16 -- common/autotest_common.sh@926 -- # '[' -z 457568 ']' 00:06:55.545 17:32:16 -- common/autotest_common.sh@930 -- # kill -0 457568 00:06:55.545 17:32:16 -- 
common/autotest_common.sh@931 -- # uname 00:06:55.545 17:32:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:55.545 17:32:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 457568 00:06:55.545 17:32:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:55.545 17:32:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:55.545 17:32:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 457568' 00:06:55.545 killing process with pid 457568 00:06:55.545 17:32:16 -- common/autotest_common.sh@945 -- # kill 457568 00:06:55.545 17:32:16 -- common/autotest_common.sh@950 -- # wait 457568 00:06:55.803 17:32:17 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:55.803 00:06:55.803 real 0m12.311s 00:06:55.803 user 0m48.232s 00:06:55.803 sys 0m1.043s 00:06:55.803 17:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.803 17:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:55.803 ************************************ 00:06:55.803 END TEST nvmf_filesystem_no_in_capsule 00:06:55.803 ************************************ 00:06:55.803 17:32:17 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:55.803 17:32:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:55.803 17:32:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.803 17:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:55.803 ************************************ 00:06:55.803 START TEST nvmf_filesystem_in_capsule 00:06:55.803 ************************************ 00:06:55.803 17:32:17 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:06:55.803 17:32:17 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:55.803 17:32:17 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:55.803 17:32:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:55.803 17:32:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:55.803 17:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:55.803 17:32:17 -- nvmf/common.sh@469 -- # nvmfpid=459895 00:06:55.803 17:32:17 -- nvmf/common.sh@470 -- # waitforlisten 459895 00:06:55.803 17:32:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:55.803 17:32:17 -- common/autotest_common.sh@819 -- # '[' -z 459895 ']' 00:06:55.803 17:32:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.804 17:32:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:55.804 17:32:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.804 17:32:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:55.804 17:32:17 -- common/autotest_common.sh@10 -- # set +x 00:06:55.804 [2024-07-24 17:32:17.385306] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:55.804 [2024-07-24 17:32:17.385350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.062 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.062 [2024-07-24 17:32:17.442771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.062 [2024-07-24 17:32:17.520962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:56.062 [2024-07-24 17:32:17.521076] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.062 [2024-07-24 17:32:17.521083] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.062 [2024-07-24 17:32:17.521089] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:56.062 [2024-07-24 17:32:17.521121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.062 [2024-07-24 17:32:17.521145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.062 [2024-07-24 17:32:17.521218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.062 [2024-07-24 17:32:17.521219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.630 17:32:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:56.630 17:32:18 -- common/autotest_common.sh@852 -- # return 0 00:06:56.630 17:32:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:56.630 17:32:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:56.630 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:06:56.889 17:32:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.889 17:32:18 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:56.889 17:32:18 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:56.889 17:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:56.889 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:06:56.889 [2024-07-24 17:32:18.243503] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.889 17:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:56.889 17:32:18 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:56.889 17:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:56.889 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:06:56.889 Malloc1 00:06:56.889 17:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:56.889 17:32:18 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:56.889 17:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:56.889 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:06:56.889 17:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:56.889 17:32:18 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:56.889 17:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:56.889 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:06:56.889 17:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:56.889 17:32:18 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
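The in-capsule pass that has just started differs from the first pass in exactly one target-side parameter: the transport is created with -c 4096 instead of -c 0, which, per the option's meaning, allows up to 4 KiB of write data to travel inside the command capsule instead of being fetched in a separate data transfer. Everything else — bdev, subsystem, namespace, listener and the whole host-side filesystem cycle — is repeated unchanged.

# first pass (nvmf_filesystem_no_in_capsule)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
# second pass (nvmf_filesystem_in_capsule)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096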
00:06:56.889 17:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:56.889 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:06:56.889 [2024-07-24 17:32:18.387022] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.889 17:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:56.889 17:32:18 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:56.889 17:32:18 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:56.889 17:32:18 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:56.889 17:32:18 -- common/autotest_common.sh@1359 -- # local bs 00:06:56.889 17:32:18 -- common/autotest_common.sh@1360 -- # local nb 00:06:56.889 17:32:18 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:56.889 17:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:56.889 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:06:56.889 17:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:56.889 17:32:18 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:56.889 { 00:06:56.889 "name": "Malloc1", 00:06:56.889 "aliases": [ 00:06:56.889 "b9d7f63e-1f03-4ec6-942e-15051fd74f94" 00:06:56.889 ], 00:06:56.889 "product_name": "Malloc disk", 00:06:56.889 "block_size": 512, 00:06:56.889 "num_blocks": 1048576, 00:06:56.889 "uuid": "b9d7f63e-1f03-4ec6-942e-15051fd74f94", 00:06:56.889 "assigned_rate_limits": { 00:06:56.889 "rw_ios_per_sec": 0, 00:06:56.889 "rw_mbytes_per_sec": 0, 00:06:56.889 "r_mbytes_per_sec": 0, 00:06:56.889 "w_mbytes_per_sec": 0 00:06:56.889 }, 00:06:56.889 "claimed": true, 00:06:56.889 "claim_type": "exclusive_write", 00:06:56.889 "zoned": false, 00:06:56.889 "supported_io_types": { 00:06:56.889 "read": true, 00:06:56.889 "write": true, 00:06:56.889 "unmap": true, 00:06:56.889 "write_zeroes": true, 00:06:56.889 "flush": true, 00:06:56.889 "reset": true, 00:06:56.889 "compare": false, 00:06:56.889 "compare_and_write": false, 00:06:56.889 "abort": true, 00:06:56.889 "nvme_admin": false, 00:06:56.889 "nvme_io": false 00:06:56.889 }, 00:06:56.889 "memory_domains": [ 00:06:56.889 { 00:06:56.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.889 "dma_device_type": 2 00:06:56.889 } 00:06:56.889 ], 00:06:56.889 "driver_specific": {} 00:06:56.889 } 00:06:56.889 ]' 00:06:56.889 17:32:18 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:56.889 17:32:18 -- common/autotest_common.sh@1362 -- # bs=512 00:06:56.889 17:32:18 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:57.148 17:32:18 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:57.148 17:32:18 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:57.148 17:32:18 -- common/autotest_common.sh@1367 -- # echo 512 00:06:57.148 17:32:18 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:57.148 17:32:18 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:58.083 17:32:19 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:58.083 17:32:19 -- common/autotest_common.sh@1177 -- # local i=0 00:06:58.083 17:32:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:58.083 17:32:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:58.083 17:32:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:00.614 17:32:21 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:00.614 17:32:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:00.614 17:32:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:00.614 17:32:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:00.614 17:32:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:00.614 17:32:21 -- common/autotest_common.sh@1187 -- # return 0 00:07:00.614 17:32:21 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:00.614 17:32:21 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:00.614 17:32:21 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:00.614 17:32:21 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:00.614 17:32:21 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:00.614 17:32:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:00.614 17:32:21 -- setup/common.sh@80 -- # echo 536870912 00:07:00.614 17:32:21 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:00.614 17:32:21 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:00.614 17:32:21 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:00.614 17:32:21 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:00.614 17:32:21 -- target/filesystem.sh@69 -- # partprobe 00:07:00.614 17:32:22 -- target/filesystem.sh@70 -- # sleep 1 00:07:01.988 17:32:23 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:01.988 17:32:23 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:01.988 17:32:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:01.988 17:32:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.988 17:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:01.988 ************************************ 00:07:01.988 START TEST filesystem_in_capsule_ext4 00:07:01.988 ************************************ 00:07:01.988 17:32:23 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:01.988 17:32:23 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:01.988 17:32:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:01.988 17:32:23 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:01.988 17:32:23 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:01.988 17:32:23 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:01.988 17:32:23 -- common/autotest_common.sh@904 -- # local i=0 00:07:01.988 17:32:23 -- common/autotest_common.sh@905 -- # local force 00:07:01.988 17:32:23 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:01.988 17:32:23 -- common/autotest_common.sh@908 -- # force=-F 00:07:01.988 17:32:23 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:01.988 mke2fs 1.46.5 (30-Dec-2021) 00:07:01.988 Discarding device blocks: 0/522240 done 00:07:01.988 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:01.988 Filesystem UUID: 2b87b597-b2e8-4456-8873-7ee36d35a98d 00:07:01.988 Superblock backups stored on blocks: 00:07:01.988 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:01.988 00:07:01.988 Allocating group tables: 0/64 done 00:07:01.988 Writing inode tables: 0/64 done 00:07:01.988 Creating journal (8192 blocks): done 00:07:01.988 Writing superblocks and filesystem accounting information: 0/64 done 00:07:01.988 00:07:01.988 
17:32:23 -- common/autotest_common.sh@921 -- # return 0 00:07:01.988 17:32:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:02.246 17:32:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:02.246 17:32:23 -- target/filesystem.sh@25 -- # sync 00:07:02.246 17:32:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:02.246 17:32:23 -- target/filesystem.sh@27 -- # sync 00:07:02.246 17:32:23 -- target/filesystem.sh@29 -- # i=0 00:07:02.246 17:32:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:02.246 17:32:23 -- target/filesystem.sh@37 -- # kill -0 459895 00:07:02.246 17:32:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:02.246 17:32:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:02.246 17:32:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:02.246 17:32:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:02.246 00:07:02.246 real 0m0.582s 00:07:02.246 user 0m0.030s 00:07:02.246 sys 0m0.034s 00:07:02.246 17:32:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.246 17:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:02.246 ************************************ 00:07:02.246 END TEST filesystem_in_capsule_ext4 00:07:02.246 ************************************ 00:07:02.246 17:32:23 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:02.246 17:32:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:02.246 17:32:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.246 17:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:02.246 ************************************ 00:07:02.246 START TEST filesystem_in_capsule_btrfs 00:07:02.246 ************************************ 00:07:02.246 17:32:23 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:02.246 17:32:23 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:02.246 17:32:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:02.246 17:32:23 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:02.246 17:32:23 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:02.246 17:32:23 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:02.246 17:32:23 -- common/autotest_common.sh@904 -- # local i=0 00:07:02.246 17:32:23 -- common/autotest_common.sh@905 -- # local force 00:07:02.246 17:32:23 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:02.246 17:32:23 -- common/autotest_common.sh@910 -- # force=-f 00:07:02.246 17:32:23 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:02.504 btrfs-progs v6.6.2 00:07:02.504 See https://btrfs.readthedocs.io for more information. 00:07:02.504 00:07:02.504 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:02.504 NOTE: several default settings have changed in version 5.15, please make sure 00:07:02.504 this does not affect your deployments: 00:07:02.504 - DUP for metadata (-m dup) 00:07:02.504 - enabled no-holes (-O no-holes) 00:07:02.504 - enabled free-space-tree (-R free-space-tree) 00:07:02.504 00:07:02.504 Label: (null) 00:07:02.504 UUID: c98cb7d1-31c6-46a3-b1d0-bcde27f895cc 00:07:02.504 Node size: 16384 00:07:02.504 Sector size: 4096 00:07:02.504 Filesystem size: 510.00MiB 00:07:02.504 Block group profiles: 00:07:02.504 Data: single 8.00MiB 00:07:02.504 Metadata: DUP 32.00MiB 00:07:02.504 System: DUP 8.00MiB 00:07:02.504 SSD detected: yes 00:07:02.504 Zoned device: no 00:07:02.504 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:02.504 Runtime features: free-space-tree 00:07:02.504 Checksum: crc32c 00:07:02.504 Number of devices: 1 00:07:02.504 Devices: 00:07:02.504 ID SIZE PATH 00:07:02.504 1 510.00MiB /dev/nvme0n1p1 00:07:02.504 00:07:02.504 17:32:24 -- common/autotest_common.sh@921 -- # return 0 00:07:02.504 17:32:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:03.878 17:32:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:03.878 17:32:25 -- target/filesystem.sh@25 -- # sync 00:07:03.878 17:32:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:03.878 17:32:25 -- target/filesystem.sh@27 -- # sync 00:07:03.878 17:32:25 -- target/filesystem.sh@29 -- # i=0 00:07:03.878 17:32:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:03.878 17:32:25 -- target/filesystem.sh@37 -- # kill -0 459895 00:07:03.878 17:32:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:03.878 17:32:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:03.878 17:32:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:03.878 17:32:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:03.878 00:07:03.878 real 0m1.380s 00:07:03.878 user 0m0.020s 00:07:03.878 sys 0m0.061s 00:07:03.878 17:32:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.878 17:32:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.878 ************************************ 00:07:03.878 END TEST filesystem_in_capsule_btrfs 00:07:03.878 ************************************ 00:07:03.878 17:32:25 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:03.878 17:32:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:03.878 17:32:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.878 17:32:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.878 ************************************ 00:07:03.878 START TEST filesystem_in_capsule_xfs 00:07:03.878 ************************************ 00:07:03.878 17:32:25 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:03.878 17:32:25 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:03.878 17:32:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.878 17:32:25 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:03.878 17:32:25 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:03.878 17:32:25 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:03.878 17:32:25 -- common/autotest_common.sh@904 -- # local i=0 00:07:03.878 17:32:25 -- common/autotest_common.sh@905 -- # local force 00:07:03.878 17:32:25 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:03.878 17:32:25 -- common/autotest_common.sh@910 -- # force=-f 
00:07:03.878 17:32:25 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:03.878 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:03.878 = sectsz=512 attr=2, projid32bit=1 00:07:03.878 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:03.878 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:03.878 data = bsize=4096 blocks=130560, imaxpct=25 00:07:03.878 = sunit=0 swidth=0 blks 00:07:03.878 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:03.878 log =internal log bsize=4096 blocks=16384, version=2 00:07:03.878 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:03.878 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:04.812 Discarding blocks...Done. 00:07:04.812 17:32:26 -- common/autotest_common.sh@921 -- # return 0 00:07:04.812 17:32:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.343 17:32:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.343 17:32:28 -- target/filesystem.sh@25 -- # sync 00:07:07.343 17:32:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.343 17:32:28 -- target/filesystem.sh@27 -- # sync 00:07:07.343 17:32:28 -- target/filesystem.sh@29 -- # i=0 00:07:07.343 17:32:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.343 17:32:28 -- target/filesystem.sh@37 -- # kill -0 459895 00:07:07.343 17:32:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.343 17:32:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.343 17:32:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.343 17:32:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.343 00:07:07.343 real 0m3.711s 00:07:07.343 user 0m0.021s 00:07:07.343 sys 0m0.053s 00:07:07.343 17:32:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.343 17:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:07.343 ************************************ 00:07:07.343 END TEST filesystem_in_capsule_xfs 00:07:07.343 ************************************ 00:07:07.601 17:32:28 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:07.860 17:32:29 -- target/filesystem.sh@93 -- # sync 00:07:07.860 17:32:29 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.860 17:32:29 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.860 17:32:29 -- common/autotest_common.sh@1198 -- # local i=0 00:07:07.860 17:32:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:07.860 17:32:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.860 17:32:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:07.860 17:32:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.860 17:32:29 -- common/autotest_common.sh@1210 -- # return 0 00:07:07.860 17:32:29 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.860 17:32:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:07.860 17:32:29 -- common/autotest_common.sh@10 -- # set +x 00:07:07.860 17:32:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:07.860 17:32:29 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:07.860 17:32:29 -- target/filesystem.sh@101 -- # killprocess 459895 00:07:07.860 17:32:29 -- common/autotest_common.sh@926 -- # '[' -z 459895 ']' 00:07:07.860 17:32:29 -- common/autotest_common.sh@930 -- # kill -0 459895 
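The teardown traced here (and finishing just below) mirrors the setup in reverse: drop the test partition, flush, disconnect the host, wait for the block device to disappear, delete the subsystem, then stop the target. A condensed sketch with the same names as the log; the wait loop is simplified and the rpc.py wrapper is assumed as above.

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # remove the partition while holding the device lock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"                 # pid 459895 in this pass
modprobe -v -r nvme-tcp                            # module cleanup, done later here by nvmftestfini
modprobe -v -r nvme-fabrics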
00:07:07.860 17:32:29 -- common/autotest_common.sh@931 -- # uname 00:07:07.860 17:32:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:07.860 17:32:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 459895 00:07:07.860 17:32:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:07.860 17:32:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:07.860 17:32:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 459895' 00:07:07.860 killing process with pid 459895 00:07:07.860 17:32:29 -- common/autotest_common.sh@945 -- # kill 459895 00:07:07.860 17:32:29 -- common/autotest_common.sh@950 -- # wait 459895 00:07:08.427 17:32:29 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:08.427 00:07:08.427 real 0m12.477s 00:07:08.427 user 0m48.922s 00:07:08.427 sys 0m1.028s 00:07:08.427 17:32:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.427 17:32:29 -- common/autotest_common.sh@10 -- # set +x 00:07:08.427 ************************************ 00:07:08.427 END TEST nvmf_filesystem_in_capsule 00:07:08.427 ************************************ 00:07:08.427 17:32:29 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:08.427 17:32:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:08.427 17:32:29 -- nvmf/common.sh@116 -- # sync 00:07:08.427 17:32:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:08.427 17:32:29 -- nvmf/common.sh@119 -- # set +e 00:07:08.427 17:32:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:08.427 17:32:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:08.427 rmmod nvme_tcp 00:07:08.427 rmmod nvme_fabrics 00:07:08.427 rmmod nvme_keyring 00:07:08.427 17:32:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:08.427 17:32:29 -- nvmf/common.sh@123 -- # set -e 00:07:08.427 17:32:29 -- nvmf/common.sh@124 -- # return 0 00:07:08.427 17:32:29 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:08.427 17:32:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:08.427 17:32:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:08.427 17:32:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:08.427 17:32:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.427 17:32:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:08.427 17:32:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.427 17:32:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.427 17:32:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.989 17:32:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:10.989 00:07:10.989 real 0m32.873s 00:07:10.989 user 1m38.851s 00:07:10.989 sys 0m6.472s 00:07:10.989 17:32:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.989 17:32:31 -- common/autotest_common.sh@10 -- # set +x 00:07:10.989 ************************************ 00:07:10.989 END TEST nvmf_filesystem 00:07:10.989 ************************************ 00:07:10.989 17:32:31 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:10.989 17:32:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:10.989 17:32:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.989 17:32:31 -- common/autotest_common.sh@10 -- # set +x 00:07:10.989 ************************************ 00:07:10.989 START TEST nvmf_discovery 00:07:10.989 ************************************ 00:07:10.989 17:32:32 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:10.989 * Looking for test storage... 00:07:10.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.989 17:32:32 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.989 17:32:32 -- nvmf/common.sh@7 -- # uname -s 00:07:10.989 17:32:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.989 17:32:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.989 17:32:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.989 17:32:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.989 17:32:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.989 17:32:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.989 17:32:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.989 17:32:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.989 17:32:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.989 17:32:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.989 17:32:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:10.989 17:32:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:10.989 17:32:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.989 17:32:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.989 17:32:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.989 17:32:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.989 17:32:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.989 17:32:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.989 17:32:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.989 17:32:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.989 17:32:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.989 17:32:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.989 17:32:32 -- paths/export.sh@5 -- # export PATH 00:07:10.989 17:32:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.989 17:32:32 -- nvmf/common.sh@46 -- # : 0 00:07:10.989 17:32:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:10.989 17:32:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:10.989 17:32:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:10.989 17:32:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.989 17:32:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.989 17:32:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:10.989 17:32:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:10.989 17:32:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:10.989 17:32:32 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:10.989 17:32:32 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:10.989 17:32:32 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:10.989 17:32:32 -- target/discovery.sh@15 -- # hash nvme 00:07:10.989 17:32:32 -- target/discovery.sh@20 -- # nvmftestinit 00:07:10.989 17:32:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:10.989 17:32:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.989 17:32:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:10.989 17:32:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:10.989 17:32:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:10.989 17:32:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.989 17:32:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.989 17:32:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.990 17:32:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:10.990 17:32:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:10.990 17:32:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:10.990 17:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 17:32:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:16.268 17:32:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:16.268 17:32:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:16.268 17:32:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:16.268 17:32:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:16.268 17:32:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:16.268 17:32:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:16.268 17:32:37 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:16.268 17:32:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:16.268 17:32:37 -- nvmf/common.sh@295 -- # e810=() 00:07:16.268 17:32:37 -- nvmf/common.sh@295 -- # local -ga e810 00:07:16.268 17:32:37 -- nvmf/common.sh@296 -- # x722=() 00:07:16.268 17:32:37 -- nvmf/common.sh@296 -- # local -ga x722 00:07:16.268 17:32:37 -- nvmf/common.sh@297 -- # mlx=() 00:07:16.268 17:32:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:16.268 17:32:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.268 17:32:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:16.268 17:32:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:16.268 17:32:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:16.268 17:32:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:16.268 17:32:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:16.268 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:16.268 17:32:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:16.268 17:32:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:16.268 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:16.268 17:32:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:16.268 17:32:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:16.268 17:32:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:16.269 17:32:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:16.269 17:32:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:16.269 17:32:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.269 17:32:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:16.269 17:32:37 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.269 17:32:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:16.269 Found net devices under 0000:86:00.0: cvl_0_0 00:07:16.269 17:32:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.269 17:32:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:16.269 17:32:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.269 17:32:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:16.269 17:32:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.269 17:32:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:16.269 Found net devices under 0000:86:00.1: cvl_0_1 00:07:16.269 17:32:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.269 17:32:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:16.269 17:32:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:16.269 17:32:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:16.269 17:32:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:16.269 17:32:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:16.269 17:32:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.269 17:32:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.269 17:32:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.269 17:32:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:16.269 17:32:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.269 17:32:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.269 17:32:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:16.269 17:32:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.269 17:32:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.269 17:32:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:16.269 17:32:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:16.269 17:32:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.269 17:32:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.269 17:32:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.269 17:32:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.269 17:32:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:16.269 17:32:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.269 17:32:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.269 17:32:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.269 17:32:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:16.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:07:16.269 00:07:16.269 --- 10.0.0.2 ping statistics --- 00:07:16.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.269 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:07:16.269 17:32:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:07:16.269 00:07:16.269 --- 10.0.0.1 ping statistics --- 00:07:16.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.269 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:07:16.269 17:32:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.269 17:32:37 -- nvmf/common.sh@410 -- # return 0 00:07:16.269 17:32:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:16.269 17:32:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.269 17:32:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:16.269 17:32:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:16.269 17:32:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.269 17:32:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:16.269 17:32:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:16.269 17:32:37 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:16.269 17:32:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:16.269 17:32:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:16.269 17:32:37 -- common/autotest_common.sh@10 -- # set +x 00:07:16.269 17:32:37 -- nvmf/common.sh@469 -- # nvmfpid=465525 00:07:16.269 17:32:37 -- nvmf/common.sh@470 -- # waitforlisten 465525 00:07:16.269 17:32:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:16.269 17:32:37 -- common/autotest_common.sh@819 -- # '[' -z 465525 ']' 00:07:16.269 17:32:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.269 17:32:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:16.269 17:32:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.269 17:32:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:16.269 17:32:37 -- common/autotest_common.sh@10 -- # set +x 00:07:16.269 [2024-07-24 17:32:37.662690] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:16.269 [2024-07-24 17:32:37.662735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.269 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.269 [2024-07-24 17:32:37.722493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.269 [2024-07-24 17:32:37.800714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:16.269 [2024-07-24 17:32:37.800828] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.269 [2024-07-24 17:32:37.800836] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.269 [2024-07-24 17:32:37.800842] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
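Stripped of the xtrace noise, the bring-up that follows is: launch nvmf_tgt inside the target namespace, wait for its RPC socket, create the TCP transport, then build four null-bdev subsystems, each with one namespace and a listener on 10.0.0.2:4420, plus the discovery listener and a referral. A sketch with the values from this run (scripts/rpc.py and the /var/tmp/spdk.sock default are assumptions about what rpc_cmd wraps):

ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # crude waitforlisten

rpc() { scripts/rpc.py "$@"; }                        # stand-in for the log's rpc_cmd
rpc nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
  rpc bdev_null_create Null$i 102400 512              # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the test
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430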
00:07:16.269 [2024-07-24 17:32:37.800885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.269 [2024-07-24 17:32:37.800923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.269 [2024-07-24 17:32:37.800988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.269 [2024-07-24 17:32:37.800989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.207 17:32:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:17.207 17:32:38 -- common/autotest_common.sh@852 -- # return 0 00:07:17.207 17:32:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:17.207 17:32:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 17:32:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.207 17:32:38 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 [2024-07-24 17:32:38.511356] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@26 -- # seq 1 4 00:07:17.207 17:32:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.207 17:32:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 Null1 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 [2024-07-24 17:32:38.556919] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.207 17:32:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 Null2 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:17.207 17:32:38 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.207 17:32:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 Null3 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.207 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.207 17:32:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:17.207 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.208 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.208 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.208 17:32:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:17.208 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.208 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.208 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.208 17:32:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.208 17:32:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:17.208 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.208 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.208 Null4 00:07:17.208 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.208 17:32:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:17.208 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.208 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.208 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.208 17:32:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:17.208 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.208 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.208 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.208 17:32:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:17.208 
17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.208 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.208 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.208 17:32:38 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.208 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.208 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.208 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.208 17:32:38 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:17.208 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.208 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.208 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.208 17:32:38 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:17.208 00:07:17.208 Discovery Log Number of Records 6, Generation counter 6 00:07:17.208 =====Discovery Log Entry 0====== 00:07:17.208 trtype: tcp 00:07:17.208 adrfam: ipv4 00:07:17.208 subtype: current discovery subsystem 00:07:17.208 treq: not required 00:07:17.208 portid: 0 00:07:17.208 trsvcid: 4420 00:07:17.208 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:17.208 traddr: 10.0.0.2 00:07:17.208 eflags: explicit discovery connections, duplicate discovery information 00:07:17.208 sectype: none 00:07:17.208 =====Discovery Log Entry 1====== 00:07:17.208 trtype: tcp 00:07:17.208 adrfam: ipv4 00:07:17.208 subtype: nvme subsystem 00:07:17.208 treq: not required 00:07:17.208 portid: 0 00:07:17.208 trsvcid: 4420 00:07:17.208 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:17.208 traddr: 10.0.0.2 00:07:17.208 eflags: none 00:07:17.208 sectype: none 00:07:17.208 =====Discovery Log Entry 2====== 00:07:17.208 trtype: tcp 00:07:17.208 adrfam: ipv4 00:07:17.208 subtype: nvme subsystem 00:07:17.208 treq: not required 00:07:17.208 portid: 0 00:07:17.208 trsvcid: 4420 00:07:17.208 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:17.208 traddr: 10.0.0.2 00:07:17.208 eflags: none 00:07:17.208 sectype: none 00:07:17.208 =====Discovery Log Entry 3====== 00:07:17.208 trtype: tcp 00:07:17.208 adrfam: ipv4 00:07:17.208 subtype: nvme subsystem 00:07:17.208 treq: not required 00:07:17.208 portid: 0 00:07:17.208 trsvcid: 4420 00:07:17.208 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:17.208 traddr: 10.0.0.2 00:07:17.208 eflags: none 00:07:17.208 sectype: none 00:07:17.208 =====Discovery Log Entry 4====== 00:07:17.208 trtype: tcp 00:07:17.208 adrfam: ipv4 00:07:17.208 subtype: nvme subsystem 00:07:17.208 treq: not required 00:07:17.208 portid: 0 00:07:17.208 trsvcid: 4420 00:07:17.208 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:17.208 traddr: 10.0.0.2 00:07:17.208 eflags: none 00:07:17.208 sectype: none 00:07:17.208 =====Discovery Log Entry 5====== 00:07:17.208 trtype: tcp 00:07:17.208 adrfam: ipv4 00:07:17.208 subtype: discovery subsystem referral 00:07:17.208 treq: not required 00:07:17.208 portid: 0 00:07:17.208 trsvcid: 4430 00:07:17.208 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:17.208 traddr: 10.0.0.2 00:07:17.208 eflags: none 00:07:17.208 sectype: none 00:07:17.208 17:32:38 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:17.208 Perform nvmf subsystem discovery via RPC 00:07:17.208 17:32:38 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:17.208 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.208 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.208 [2024-07-24 17:32:38.749384] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:17.208 [ 00:07:17.208 { 00:07:17.208 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:17.208 "subtype": "Discovery", 00:07:17.208 "listen_addresses": [ 00:07:17.208 { 00:07:17.208 "transport": "TCP", 00:07:17.208 "trtype": "TCP", 00:07:17.208 "adrfam": "IPv4", 00:07:17.208 "traddr": "10.0.0.2", 00:07:17.208 "trsvcid": "4420" 00:07:17.208 } 00:07:17.208 ], 00:07:17.208 "allow_any_host": true, 00:07:17.208 "hosts": [] 00:07:17.208 }, 00:07:17.208 { 00:07:17.208 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:17.208 "subtype": "NVMe", 00:07:17.208 "listen_addresses": [ 00:07:17.208 { 00:07:17.208 "transport": "TCP", 00:07:17.208 "trtype": "TCP", 00:07:17.208 "adrfam": "IPv4", 00:07:17.208 "traddr": "10.0.0.2", 00:07:17.208 "trsvcid": "4420" 00:07:17.208 } 00:07:17.208 ], 00:07:17.208 "allow_any_host": true, 00:07:17.208 "hosts": [], 00:07:17.208 "serial_number": "SPDK00000000000001", 00:07:17.208 "model_number": "SPDK bdev Controller", 00:07:17.208 "max_namespaces": 32, 00:07:17.208 "min_cntlid": 1, 00:07:17.208 "max_cntlid": 65519, 00:07:17.208 "namespaces": [ 00:07:17.208 { 00:07:17.208 "nsid": 1, 00:07:17.208 "bdev_name": "Null1", 00:07:17.208 "name": "Null1", 00:07:17.208 "nguid": "215F150BA0B54DA395DF35DFBCE39C74", 00:07:17.208 "uuid": "215f150b-a0b5-4da3-95df-35dfbce39c74" 00:07:17.208 } 00:07:17.208 ] 00:07:17.208 }, 00:07:17.208 { 00:07:17.208 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:17.208 "subtype": "NVMe", 00:07:17.208 "listen_addresses": [ 00:07:17.208 { 00:07:17.208 "transport": "TCP", 00:07:17.208 "trtype": "TCP", 00:07:17.208 "adrfam": "IPv4", 00:07:17.208 "traddr": "10.0.0.2", 00:07:17.208 "trsvcid": "4420" 00:07:17.208 } 00:07:17.208 ], 00:07:17.208 "allow_any_host": true, 00:07:17.208 "hosts": [], 00:07:17.208 "serial_number": "SPDK00000000000002", 00:07:17.208 "model_number": "SPDK bdev Controller", 00:07:17.208 "max_namespaces": 32, 00:07:17.208 "min_cntlid": 1, 00:07:17.208 "max_cntlid": 65519, 00:07:17.208 "namespaces": [ 00:07:17.208 { 00:07:17.208 "nsid": 1, 00:07:17.208 "bdev_name": "Null2", 00:07:17.208 "name": "Null2", 00:07:17.208 "nguid": "CDF320AFF6D94E819CD4BB77ED433387", 00:07:17.208 "uuid": "cdf320af-f6d9-4e81-9cd4-bb77ed433387" 00:07:17.208 } 00:07:17.208 ] 00:07:17.208 }, 00:07:17.208 { 00:07:17.208 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:17.208 "subtype": "NVMe", 00:07:17.208 "listen_addresses": [ 00:07:17.208 { 00:07:17.208 "transport": "TCP", 00:07:17.208 "trtype": "TCP", 00:07:17.208 "adrfam": "IPv4", 00:07:17.208 "traddr": "10.0.0.2", 00:07:17.208 "trsvcid": "4420" 00:07:17.208 } 00:07:17.208 ], 00:07:17.208 "allow_any_host": true, 00:07:17.208 "hosts": [], 00:07:17.208 "serial_number": "SPDK00000000000003", 00:07:17.208 "model_number": "SPDK bdev Controller", 00:07:17.208 "max_namespaces": 32, 00:07:17.208 "min_cntlid": 1, 00:07:17.208 "max_cntlid": 65519, 00:07:17.208 "namespaces": [ 00:07:17.208 { 00:07:17.208 "nsid": 1, 00:07:17.208 "bdev_name": "Null3", 00:07:17.208 "name": "Null3", 00:07:17.208 "nguid": "17F9993C514140218C038D2971FE0C43", 00:07:17.208 "uuid": "17f9993c-5141-4021-8c03-8d2971fe0c43" 00:07:17.208 } 00:07:17.208 ] 
00:07:17.208 }, 00:07:17.208 { 00:07:17.208 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:17.208 "subtype": "NVMe", 00:07:17.208 "listen_addresses": [ 00:07:17.208 { 00:07:17.208 "transport": "TCP", 00:07:17.208 "trtype": "TCP", 00:07:17.208 "adrfam": "IPv4", 00:07:17.208 "traddr": "10.0.0.2", 00:07:17.208 "trsvcid": "4420" 00:07:17.208 } 00:07:17.208 ], 00:07:17.208 "allow_any_host": true, 00:07:17.208 "hosts": [], 00:07:17.208 "serial_number": "SPDK00000000000004", 00:07:17.209 "model_number": "SPDK bdev Controller", 00:07:17.209 "max_namespaces": 32, 00:07:17.209 "min_cntlid": 1, 00:07:17.209 "max_cntlid": 65519, 00:07:17.209 "namespaces": [ 00:07:17.209 { 00:07:17.209 "nsid": 1, 00:07:17.209 "bdev_name": "Null4", 00:07:17.209 "name": "Null4", 00:07:17.209 "nguid": "49E29E54FC884CB0BC43DC288CE79085", 00:07:17.209 "uuid": "49e29e54-fc88-4cb0-bc43-dc288ce79085" 00:07:17.209 } 00:07:17.209 ] 00:07:17.209 } 00:07:17.209 ] 00:07:17.209 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.209 17:32:38 -- target/discovery.sh@42 -- # seq 1 4 00:07:17.209 17:32:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.209 17:32:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.209 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.209 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.209 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.209 17:32:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:17.209 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.209 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.209 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.209 17:32:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.209 17:32:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:17.209 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.209 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.209 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.209 17:32:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:17.209 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.209 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.469 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.469 17:32:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.469 17:32:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:17.469 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.469 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.469 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.469 17:32:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:17.469 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.469 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.469 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.469 17:32:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.469 17:32:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:17.469 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.469 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.469 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
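The teardown traced around here undoes the provisioning in the same loop shape: delete each subsystem, then its null bdev, drop the referral, and check that bdev_get_bdevs reports nothing left. Approximately (scripts/rpc.py again assumed for rpc_cmd):

for i in 1 2 3 4; do
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
  scripts/rpc.py bdev_null_delete Null$i
done
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

# the test treats any bdev still listed here as a leak
leftover=$(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')
[ -z "$leftover" ] || echo "unexpected bdevs left: $leftover" >&2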
00:07:17.469 17:32:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:17.469 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.469 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.469 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.469 17:32:38 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:17.469 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.469 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.469 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.469 17:32:38 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:17.469 17:32:38 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:17.469 17:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.469 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.469 17:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.469 17:32:38 -- target/discovery.sh@49 -- # check_bdevs= 00:07:17.469 17:32:38 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:17.469 17:32:38 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:17.469 17:32:38 -- target/discovery.sh@57 -- # nvmftestfini 00:07:17.469 17:32:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:17.469 17:32:38 -- nvmf/common.sh@116 -- # sync 00:07:17.469 17:32:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:17.469 17:32:38 -- nvmf/common.sh@119 -- # set +e 00:07:17.469 17:32:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:17.469 17:32:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:17.469 rmmod nvme_tcp 00:07:17.469 rmmod nvme_fabrics 00:07:17.469 rmmod nvme_keyring 00:07:17.469 17:32:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:17.469 17:32:38 -- nvmf/common.sh@123 -- # set -e 00:07:17.469 17:32:38 -- nvmf/common.sh@124 -- # return 0 00:07:17.469 17:32:38 -- nvmf/common.sh@477 -- # '[' -n 465525 ']' 00:07:17.469 17:32:38 -- nvmf/common.sh@478 -- # killprocess 465525 00:07:17.469 17:32:38 -- common/autotest_common.sh@926 -- # '[' -z 465525 ']' 00:07:17.469 17:32:38 -- common/autotest_common.sh@930 -- # kill -0 465525 00:07:17.469 17:32:38 -- common/autotest_common.sh@931 -- # uname 00:07:17.469 17:32:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:17.469 17:32:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 465525 00:07:17.469 17:32:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:17.469 17:32:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:17.469 17:32:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 465525' 00:07:17.469 killing process with pid 465525 00:07:17.469 17:32:38 -- common/autotest_common.sh@945 -- # kill 465525 00:07:17.469 [2024-07-24 17:32:38.991417] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:17.469 17:32:38 -- common/autotest_common.sh@950 -- # wait 465525 00:07:17.729 17:32:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:17.729 17:32:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:17.729 17:32:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:17.729 17:32:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:17.729 17:32:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:17.729 17:32:39 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.729 17:32:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.729 17:32:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.267 17:32:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:20.267 00:07:20.267 real 0m9.260s 00:07:20.267 user 0m7.157s 00:07:20.267 sys 0m4.494s 00:07:20.267 17:32:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.267 17:32:41 -- common/autotest_common.sh@10 -- # set +x 00:07:20.267 ************************************ 00:07:20.267 END TEST nvmf_discovery 00:07:20.267 ************************************ 00:07:20.267 17:32:41 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:20.267 17:32:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:20.267 17:32:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.267 17:32:41 -- common/autotest_common.sh@10 -- # set +x 00:07:20.267 ************************************ 00:07:20.267 START TEST nvmf_referrals 00:07:20.267 ************************************ 00:07:20.267 17:32:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:20.267 * Looking for test storage... 00:07:20.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.267 17:32:41 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.267 17:32:41 -- nvmf/common.sh@7 -- # uname -s 00:07:20.267 17:32:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.267 17:32:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.267 17:32:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.267 17:32:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.267 17:32:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.267 17:32:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.267 17:32:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.267 17:32:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.267 17:32:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.267 17:32:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.267 17:32:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:20.267 17:32:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:20.267 17:32:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.267 17:32:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.267 17:32:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.267 17:32:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.267 17:32:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.267 17:32:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.267 17:32:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.267 17:32:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.267 17:32:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.267 17:32:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.267 17:32:41 -- paths/export.sh@5 -- # export PATH 00:07:20.267 17:32:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.267 17:32:41 -- nvmf/common.sh@46 -- # : 0 00:07:20.267 17:32:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:20.267 17:32:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:20.267 17:32:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:20.267 17:32:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.267 17:32:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.267 17:32:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:20.267 17:32:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:20.267 17:32:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:20.267 17:32:41 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:20.267 17:32:41 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:20.267 17:32:41 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:20.267 17:32:41 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:20.267 17:32:41 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:20.267 17:32:41 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:20.267 17:32:41 -- target/referrals.sh@37 -- # nvmftestinit 00:07:20.267 17:32:41 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:07:20.267 17:32:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.267 17:32:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:20.267 17:32:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:20.267 17:32:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:20.267 17:32:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.267 17:32:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.267 17:32:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.267 17:32:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:20.267 17:32:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:20.267 17:32:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:20.267 17:32:41 -- common/autotest_common.sh@10 -- # set +x 00:07:25.545 17:32:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:25.545 17:32:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:25.545 17:32:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:25.545 17:32:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:25.545 17:32:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:25.545 17:32:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:25.545 17:32:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:25.545 17:32:46 -- nvmf/common.sh@294 -- # net_devs=() 00:07:25.545 17:32:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:25.545 17:32:46 -- nvmf/common.sh@295 -- # e810=() 00:07:25.545 17:32:46 -- nvmf/common.sh@295 -- # local -ga e810 00:07:25.545 17:32:46 -- nvmf/common.sh@296 -- # x722=() 00:07:25.545 17:32:46 -- nvmf/common.sh@296 -- # local -ga x722 00:07:25.545 17:32:46 -- nvmf/common.sh@297 -- # mlx=() 00:07:25.545 17:32:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:25.545 17:32:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.545 17:32:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.546 17:32:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.546 17:32:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.546 17:32:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.546 17:32:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.546 17:32:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.546 17:32:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.546 17:32:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.546 17:32:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.546 17:32:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.546 17:32:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:25.546 17:32:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:25.546 17:32:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:25.546 17:32:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:25.546 17:32:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:25.546 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:25.546 17:32:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:25.546 17:32:46 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:25.546 17:32:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:25.546 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:25.546 17:32:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:25.546 17:32:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:25.546 17:32:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.546 17:32:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:25.546 17:32:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.546 17:32:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:25.546 Found net devices under 0000:86:00.0: cvl_0_0 00:07:25.546 17:32:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.546 17:32:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:25.546 17:32:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.546 17:32:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:25.546 17:32:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.546 17:32:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:25.546 Found net devices under 0000:86:00.1: cvl_0_1 00:07:25.546 17:32:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.546 17:32:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:25.546 17:32:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:25.546 17:32:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:25.546 17:32:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.546 17:32:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.546 17:32:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.546 17:32:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:25.546 17:32:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.546 17:32:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.546 17:32:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:25.546 17:32:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.546 17:32:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.546 17:32:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:25.546 17:32:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:25.546 17:32:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.546 17:32:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
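The nvmf_tcp_init being traced here (the discovery test did the same earlier) moves one port of the E810 pair into a dedicated namespace for the target, leaves the other in the root namespace for the initiator, and proves connectivity both ways; condensed, with the interface names from this machine:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns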
00:07:25.546 17:32:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.546 17:32:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.546 17:32:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:25.546 17:32:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.546 17:32:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.546 17:32:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.546 17:32:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:25.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:07:25.546 00:07:25.546 --- 10.0.0.2 ping statistics --- 00:07:25.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.546 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:07:25.546 17:32:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:07:25.546 00:07:25.546 --- 10.0.0.1 ping statistics --- 00:07:25.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.546 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:07:25.546 17:32:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.546 17:32:46 -- nvmf/common.sh@410 -- # return 0 00:07:25.546 17:32:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:25.546 17:32:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.546 17:32:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:25.546 17:32:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.546 17:32:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:25.546 17:32:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:25.546 17:32:46 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:25.546 17:32:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:25.546 17:32:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:25.546 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.546 17:32:46 -- nvmf/common.sh@469 -- # nvmfpid=469318 00:07:25.546 17:32:46 -- nvmf/common.sh@470 -- # waitforlisten 469318 00:07:25.546 17:32:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:25.546 17:32:46 -- common/autotest_common.sh@819 -- # '[' -z 469318 ']' 00:07:25.546 17:32:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.546 17:32:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:25.546 17:32:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.546 17:32:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:25.546 17:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.546 [2024-07-24 17:32:46.487056] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
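Once the target is up, the referral checks that follow add three referral entries and read them back two ways, through the RPC and through a real discovery from the initiator; approximately, with the host NQN/ID generated in this run (scripts/rpc.py assumed for rpc_cmd):

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
done

# RPC view of the referrals
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# initiator view: referrals appear as extra discovery-log records on the 8009 listener
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
    -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort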
00:07:25.546 [2024-07-24 17:32:46.487116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.546 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.546 [2024-07-24 17:32:46.543950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.546 [2024-07-24 17:32:46.615416] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:25.546 [2024-07-24 17:32:46.615546] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.546 [2024-07-24 17:32:46.615554] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.546 [2024-07-24 17:32:46.615561] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.546 [2024-07-24 17:32:46.615605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.546 [2024-07-24 17:32:46.615629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.546 [2024-07-24 17:32:46.615693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.546 [2024-07-24 17:32:46.615694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.806 17:32:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:25.806 17:32:47 -- common/autotest_common.sh@852 -- # return 0 00:07:25.806 17:32:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:25.806 17:32:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:25.806 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:25.806 17:32:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.806 17:32:47 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:25.806 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.806 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:25.806 [2024-07-24 17:32:47.328368] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.806 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.806 17:32:47 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:25.806 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.806 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:25.806 [2024-07-24 17:32:47.341760] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:25.806 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.806 17:32:47 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:25.806 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.806 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:25.806 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.806 17:32:47 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:25.806 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.806 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:25.806 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.806 17:32:47 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:07:25.806 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.806 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:25.806 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.806 17:32:47 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.806 17:32:47 -- target/referrals.sh@48 -- # jq length 00:07:25.806 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.806 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:25.806 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.066 17:32:47 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:26.066 17:32:47 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:26.066 17:32:47 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:26.066 17:32:47 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:26.066 17:32:47 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:26.066 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.066 17:32:47 -- target/referrals.sh@21 -- # sort 00:07:26.066 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:26.066 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.066 17:32:47 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:26.066 17:32:47 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:26.066 17:32:47 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:26.066 17:32:47 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:26.066 17:32:47 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:26.066 17:32:47 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.066 17:32:47 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:26.066 17:32:47 -- target/referrals.sh@26 -- # sort 00:07:26.066 17:32:47 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:26.066 17:32:47 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:26.066 17:32:47 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:26.066 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.066 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:26.066 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.066 17:32:47 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:26.066 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.066 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:26.066 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.066 17:32:47 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:26.066 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.066 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:26.326 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.326 17:32:47 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:26.326 17:32:47 -- target/referrals.sh@56 -- # jq length 00:07:26.326 17:32:47 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.326 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:26.326 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.326 17:32:47 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:26.326 17:32:47 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:26.326 17:32:47 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:26.326 17:32:47 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:26.326 17:32:47 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.326 17:32:47 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:26.326 17:32:47 -- target/referrals.sh@26 -- # sort 00:07:26.326 17:32:47 -- target/referrals.sh@26 -- # echo 00:07:26.326 17:32:47 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:26.326 17:32:47 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:26.326 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.326 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:26.326 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.326 17:32:47 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:26.326 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.326 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:26.326 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.326 17:32:47 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:26.326 17:32:47 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:26.326 17:32:47 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:26.326 17:32:47 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:26.326 17:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.326 17:32:47 -- target/referrals.sh@21 -- # sort 00:07:26.326 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:26.326 17:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.326 17:32:47 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:26.326 17:32:47 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:26.326 17:32:47 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:26.326 17:32:47 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:26.326 17:32:47 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:26.326 17:32:47 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.326 17:32:47 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:26.326 17:32:47 -- target/referrals.sh@26 -- # sort 00:07:26.326 17:32:47 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:26.326 17:32:47 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:26.586 17:32:47 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:26.586 17:32:47 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:26.586 17:32:47 
-- target/referrals.sh@67 -- # jq -r .subnqn 00:07:26.586 17:32:47 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.586 17:32:47 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:26.586 17:32:48 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:26.586 17:32:48 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:26.586 17:32:48 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:26.586 17:32:48 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:26.586 17:32:48 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.586 17:32:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:26.586 17:32:48 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:26.586 17:32:48 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:26.586 17:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.586 17:32:48 -- common/autotest_common.sh@10 -- # set +x 00:07:26.586 17:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.586 17:32:48 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:26.586 17:32:48 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:26.586 17:32:48 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:26.586 17:32:48 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:26.586 17:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.586 17:32:48 -- target/referrals.sh@21 -- # sort 00:07:26.586 17:32:48 -- common/autotest_common.sh@10 -- # set +x 00:07:26.586 17:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.845 17:32:48 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:26.845 17:32:48 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:26.845 17:32:48 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:26.845 17:32:48 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:26.845 17:32:48 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:26.846 17:32:48 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.846 17:32:48 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:26.846 17:32:48 -- target/referrals.sh@26 -- # sort 00:07:26.846 17:32:48 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:26.846 17:32:48 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:26.846 17:32:48 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:26.846 17:32:48 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:26.846 17:32:48 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:26.846 17:32:48 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.846 17:32:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:26.846 17:32:48 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:26.846 17:32:48 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:26.846 17:32:48 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:26.846 17:32:48 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:26.846 17:32:48 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:26.846 17:32:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:26.846 17:32:48 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:26.846 17:32:48 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:26.846 17:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.846 17:32:48 -- common/autotest_common.sh@10 -- # set +x 00:07:26.846 17:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.846 17:32:48 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:26.846 17:32:48 -- target/referrals.sh@82 -- # jq length 00:07:26.846 17:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.846 17:32:48 -- common/autotest_common.sh@10 -- # set +x 00:07:26.846 17:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:27.105 17:32:48 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:27.105 17:32:48 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:27.105 17:32:48 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:27.105 17:32:48 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:27.105 17:32:48 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:27.105 17:32:48 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:27.105 17:32:48 -- target/referrals.sh@26 -- # sort 00:07:27.105 17:32:48 -- target/referrals.sh@26 -- # echo 00:07:27.105 17:32:48 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:27.105 17:32:48 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:27.105 17:32:48 -- target/referrals.sh@86 -- # nvmftestfini 00:07:27.105 17:32:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:27.105 17:32:48 -- nvmf/common.sh@116 -- # sync 00:07:27.105 17:32:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:27.105 17:32:48 -- nvmf/common.sh@119 -- # set +e 00:07:27.105 17:32:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:27.105 17:32:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:27.105 rmmod nvme_tcp 00:07:27.105 rmmod nvme_fabrics 00:07:27.105 rmmod nvme_keyring 00:07:27.105 17:32:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:27.105 17:32:48 -- nvmf/common.sh@123 -- # set -e 00:07:27.105 17:32:48 -- nvmf/common.sh@124 -- # return 0 00:07:27.105 17:32:48 -- nvmf/common.sh@477 
-- # '[' -n 469318 ']' 00:07:27.105 17:32:48 -- nvmf/common.sh@478 -- # killprocess 469318 00:07:27.105 17:32:48 -- common/autotest_common.sh@926 -- # '[' -z 469318 ']' 00:07:27.105 17:32:48 -- common/autotest_common.sh@930 -- # kill -0 469318 00:07:27.105 17:32:48 -- common/autotest_common.sh@931 -- # uname 00:07:27.105 17:32:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:27.105 17:32:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 469318 00:07:27.105 17:32:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:27.105 17:32:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:27.105 17:32:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 469318' 00:07:27.105 killing process with pid 469318 00:07:27.105 17:32:48 -- common/autotest_common.sh@945 -- # kill 469318 00:07:27.105 17:32:48 -- common/autotest_common.sh@950 -- # wait 469318 00:07:27.365 17:32:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:27.365 17:32:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:27.365 17:32:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:27.365 17:32:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.365 17:32:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:27.365 17:32:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.365 17:32:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.365 17:32:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.906 17:32:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:29.906 00:07:29.906 real 0m9.637s 00:07:29.906 user 0m11.085s 00:07:29.906 sys 0m4.353s 00:07:29.906 17:32:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.906 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:07:29.906 ************************************ 00:07:29.906 END TEST nvmf_referrals 00:07:29.906 ************************************ 00:07:29.906 17:32:50 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:29.906 17:32:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:29.906 17:32:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.906 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:07:29.906 ************************************ 00:07:29.906 START TEST nvmf_connect_disconnect 00:07:29.906 ************************************ 00:07:29.906 17:32:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:29.906 * Looking for test storage... 
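The nvmf_referrals run that just finished drives the discovery-referral RPCs end to end: referrals are added and removed over the RPC socket, then read back both via nvmf_discovery_get_referrals and via nvme discover against the discovery listener on 10.0.0.2:8009, with jq picking the referral records out of the JSON log page. A hand-driven sketch of that flow, assuming SPDK's scripts/rpc.py stands in for the harness's rpc_cmd wrapper (the harness also passes a --hostnqn/--hostid pair generated with nvme gen-hostnqn, omitted here):

    RPC=./scripts/rpc.py   # assumption: stock SPDK RPC client instead of the rpc_cmd wrapper
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # The same referral must show up in the discovery log page seen by a host:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The test repeats this for 127.0.0.3 and 127.0.0.4 and then again with explicit subsystem NQNs (-n discovery, -n nqn.2016-06.io.spdk:cnode1), checking each time that the RPC view and the nvme discover view agree.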
00:07:29.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.906 17:32:51 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.906 17:32:51 -- nvmf/common.sh@7 -- # uname -s 00:07:29.906 17:32:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.906 17:32:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.906 17:32:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.906 17:32:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.906 17:32:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.906 17:32:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.906 17:32:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.906 17:32:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.906 17:32:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.906 17:32:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.906 17:32:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:29.906 17:32:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:29.906 17:32:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.906 17:32:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.906 17:32:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.906 17:32:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.906 17:32:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.906 17:32:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.906 17:32:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.906 17:32:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.906 17:32:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.906 17:32:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.906 17:32:51 -- paths/export.sh@5 -- # export PATH 00:07:29.906 17:32:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.906 17:32:51 -- nvmf/common.sh@46 -- # : 0 00:07:29.906 17:32:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:29.906 17:32:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:29.906 17:32:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:29.906 17:32:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.906 17:32:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.906 17:32:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:29.906 17:32:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:29.906 17:32:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:29.906 17:32:51 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.906 17:32:51 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.906 17:32:51 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:29.906 17:32:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:29.906 17:32:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.906 17:32:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:29.906 17:32:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:29.906 17:32:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:29.906 17:32:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.906 17:32:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.906 17:32:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.906 17:32:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:29.906 17:32:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:29.906 17:32:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:29.906 17:32:51 -- common/autotest_common.sh@10 -- # set +x 00:07:35.225 17:32:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:35.225 17:32:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:35.225 17:32:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:35.225 17:32:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:35.225 17:32:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:35.225 17:32:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:35.225 17:32:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:35.225 17:32:56 -- nvmf/common.sh@294 -- # net_devs=() 00:07:35.225 17:32:56 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:07:35.225 17:32:56 -- nvmf/common.sh@295 -- # e810=() 00:07:35.225 17:32:56 -- nvmf/common.sh@295 -- # local -ga e810 00:07:35.225 17:32:56 -- nvmf/common.sh@296 -- # x722=() 00:07:35.225 17:32:56 -- nvmf/common.sh@296 -- # local -ga x722 00:07:35.225 17:32:56 -- nvmf/common.sh@297 -- # mlx=() 00:07:35.225 17:32:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:35.225 17:32:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.225 17:32:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:35.225 17:32:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:35.225 17:32:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:35.225 17:32:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:35.225 17:32:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:35.225 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:35.225 17:32:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:35.225 17:32:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:35.225 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:35.225 17:32:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:35.225 17:32:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:35.225 17:32:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:35.226 17:32:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:35.226 17:32:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.226 17:32:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:35.226 17:32:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.226 17:32:56 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:07:35.226 Found net devices under 0000:86:00.0: cvl_0_0 00:07:35.226 17:32:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.226 17:32:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:35.226 17:32:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.226 17:32:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:35.226 17:32:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.226 17:32:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:35.226 Found net devices under 0000:86:00.1: cvl_0_1 00:07:35.226 17:32:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.226 17:32:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:35.226 17:32:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:35.226 17:32:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:35.226 17:32:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:35.226 17:32:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:35.226 17:32:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.226 17:32:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.226 17:32:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.226 17:32:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:35.226 17:32:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.226 17:32:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.226 17:32:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:35.226 17:32:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.226 17:32:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.226 17:32:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:35.226 17:32:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:35.226 17:32:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.226 17:32:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.226 17:32:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.226 17:32:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.226 17:32:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:35.226 17:32:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.226 17:32:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.226 17:32:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.226 17:32:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:35.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:07:35.226 00:07:35.226 --- 10.0.0.2 ping statistics --- 00:07:35.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.226 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:07:35.226 17:32:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:07:35.226 00:07:35.226 --- 10.0.0.1 ping statistics --- 00:07:35.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.226 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:07:35.226 17:32:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.226 17:32:56 -- nvmf/common.sh@410 -- # return 0 00:07:35.226 17:32:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:35.226 17:32:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.226 17:32:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:35.226 17:32:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:35.226 17:32:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.226 17:32:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:35.226 17:32:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:35.226 17:32:56 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:35.226 17:32:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:35.226 17:32:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:35.226 17:32:56 -- common/autotest_common.sh@10 -- # set +x 00:07:35.226 17:32:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.226 17:32:56 -- nvmf/common.sh@469 -- # nvmfpid=473195 00:07:35.226 17:32:56 -- nvmf/common.sh@470 -- # waitforlisten 473195 00:07:35.226 17:32:56 -- common/autotest_common.sh@819 -- # '[' -z 473195 ']' 00:07:35.226 17:32:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.226 17:32:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:35.226 17:32:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.226 17:32:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:35.226 17:32:56 -- common/autotest_common.sh@10 -- # set +x 00:07:35.226 [2024-07-24 17:32:56.579200] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:35.226 [2024-07-24 17:32:56.579244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.226 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.226 [2024-07-24 17:32:56.637927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.226 [2024-07-24 17:32:56.716248] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.226 [2024-07-24 17:32:56.716357] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.226 [2024-07-24 17:32:56.716365] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.226 [2024-07-24 17:32:56.716371] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.226 [2024-07-24 17:32:56.716412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.226 [2024-07-24 17:32:56.716436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.226 [2024-07-24 17:32:56.716501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.226 [2024-07-24 17:32:56.716502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.794 17:32:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:35.794 17:32:57 -- common/autotest_common.sh@852 -- # return 0 00:07:35.794 17:32:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:35.794 17:32:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:35.794 17:32:57 -- common/autotest_common.sh@10 -- # set +x 00:07:36.053 17:32:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.053 17:32:57 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:36.053 17:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.053 17:32:57 -- common/autotest_common.sh@10 -- # set +x 00:07:36.053 [2024-07-24 17:32:57.420384] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.053 17:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.053 17:32:57 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:36.053 17:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.053 17:32:57 -- common/autotest_common.sh@10 -- # set +x 00:07:36.053 17:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.053 17:32:57 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:36.053 17:32:57 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:36.053 17:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.053 17:32:57 -- common/autotest_common.sh@10 -- # set +x 00:07:36.053 17:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.053 17:32:57 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:36.053 17:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.053 17:32:57 -- common/autotest_common.sh@10 -- # set +x 00:07:36.053 17:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.053 17:32:57 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.053 17:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.053 17:32:57 -- common/autotest_common.sh@10 -- # set +x 00:07:36.053 [2024-07-24 17:32:57.472234] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.053 17:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.053 17:32:57 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:36.053 17:32:57 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:36.053 17:32:57 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:36.053 17:32:57 -- target/connect_disconnect.sh@34 -- # set +x 00:07:38.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:07:47.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.708 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:39.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.194 17:36:46 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
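The long run of "disconnected 1 controller(s)" messages above is the test doing its job: with num_iterations=100 and NVME_CONNECT='nvme connect -i 8' set in the trace, the host connects to and disconnects from the same subsystem one hundred times over NVMe/TCP. Stripped of the harness's readiness checks, one iteration amounts to the following, reusing the nqn.2016-06.io.spdk:cnode1 subsystem and the 10.0.0.2:4420 listener created at the start of the test (the hostnqn/hostid arguments are again omitted):

    # Sketch of the connect/disconnect loop; the real script also waits for the
    # namespace block device to appear before tearing the connection down.
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 100); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" -i 8
        # ... readiness check / I/O would go here ...
        nvme disconnect -n "$SUBNQN"    # prints "NQN:... disconnected 1 controller(s)"
    done

With -i 8 each connection gets eight I/O queues, so every iteration sets up and tears down the I/O queue pairs on the target as well as the admin queue.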
00:11:25.194 17:36:46 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:25.194 17:36:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:25.194 17:36:46 -- nvmf/common.sh@116 -- # sync 00:11:25.194 17:36:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:25.194 17:36:46 -- nvmf/common.sh@119 -- # set +e 00:11:25.194 17:36:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:25.194 17:36:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:25.194 rmmod nvme_tcp 00:11:25.194 rmmod nvme_fabrics 00:11:25.194 rmmod nvme_keyring 00:11:25.194 17:36:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:25.194 17:36:46 -- nvmf/common.sh@123 -- # set -e 00:11:25.194 17:36:46 -- nvmf/common.sh@124 -- # return 0 00:11:25.194 17:36:46 -- nvmf/common.sh@477 -- # '[' -n 473195 ']' 00:11:25.194 17:36:46 -- nvmf/common.sh@478 -- # killprocess 473195 00:11:25.194 17:36:46 -- common/autotest_common.sh@926 -- # '[' -z 473195 ']' 00:11:25.194 17:36:46 -- common/autotest_common.sh@930 -- # kill -0 473195 00:11:25.194 17:36:46 -- common/autotest_common.sh@931 -- # uname 00:11:25.194 17:36:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:25.194 17:36:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 473195 00:11:25.194 17:36:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:25.194 17:36:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:25.194 17:36:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 473195' 00:11:25.194 killing process with pid 473195 00:11:25.194 17:36:46 -- common/autotest_common.sh@945 -- # kill 473195 00:11:25.194 17:36:46 -- common/autotest_common.sh@950 -- # wait 473195 00:11:25.454 17:36:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:25.454 17:36:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:25.454 17:36:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:25.454 17:36:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.454 17:36:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:25.454 17:36:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.454 17:36:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.454 17:36:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.363 17:36:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:27.363 00:11:27.363 real 3m57.899s 00:11:27.363 user 15m13.522s 00:11:27.363 sys 0m16.933s 00:11:27.363 17:36:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.363 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:11:27.363 ************************************ 00:11:27.363 END TEST nvmf_connect_disconnect 00:11:27.363 ************************************ 00:11:27.363 17:36:48 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:27.363 17:36:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:27.363 17:36:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:27.363 17:36:48 -- common/autotest_common.sh@10 -- # set +x 00:11:27.363 ************************************ 00:11:27.363 START TEST nvmf_multitarget 00:11:27.363 ************************************ 00:11:27.363 17:36:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:27.623 * Looking for test storage... 
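Teardown (nvmftestfini in the trace) mirrors the setup: the kernel initiator modules are unloaded, the nvmf_tgt process is killed by PID and reaped, and the namespace plumbing is flushed so the next test in the chain starts clean. Condensed, with this run's PID and interface names kept purely as placeholders:

    # Sketch of the cleanup sequence traced above; 473195 and cvl_* are per-run values.
    modprobe -v -r nvme-tcp        # drags out nvme_fabrics/nvme_keyring too, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    kill 473195
    wait 473195                    # works because nvmf_tgt was started in the background by this shell
    ip netns delete cvl_0_0_ns_spdk   # assumption: what remove_spdk_ns boils down to in this run
    ip -4 addr flush cvl_0_1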
00:11:27.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.623 17:36:49 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.623 17:36:49 -- nvmf/common.sh@7 -- # uname -s 00:11:27.623 17:36:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.623 17:36:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.623 17:36:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.623 17:36:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.623 17:36:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.623 17:36:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.623 17:36:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.623 17:36:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.623 17:36:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.623 17:36:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.623 17:36:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:27.623 17:36:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:27.623 17:36:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.623 17:36:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.623 17:36:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.623 17:36:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.623 17:36:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.623 17:36:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.623 17:36:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.623 17:36:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.623 17:36:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.623 17:36:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.623 17:36:49 -- paths/export.sh@5 -- # export PATH 00:11:27.623 17:36:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.623 17:36:49 -- nvmf/common.sh@46 -- # : 0 00:11:27.623 17:36:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:27.623 17:36:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:27.623 17:36:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:27.623 17:36:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.623 17:36:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.623 17:36:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:27.624 17:36:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:27.624 17:36:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:27.624 17:36:49 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:27.624 17:36:49 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:27.624 17:36:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:27.624 17:36:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.624 17:36:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:27.624 17:36:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:27.624 17:36:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:27.624 17:36:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.624 17:36:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.624 17:36:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.624 17:36:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:27.624 17:36:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:27.624 17:36:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:27.624 17:36:49 -- common/autotest_common.sh@10 -- # set +x 00:11:32.906 17:36:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:32.906 17:36:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:32.906 17:36:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:32.906 17:36:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:32.906 17:36:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:32.906 17:36:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:32.906 17:36:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:32.906 17:36:53 -- nvmf/common.sh@294 -- # net_devs=() 00:11:32.906 17:36:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:32.906 17:36:53 -- 
nvmf/common.sh@295 -- # e810=() 00:11:32.906 17:36:53 -- nvmf/common.sh@295 -- # local -ga e810 00:11:32.906 17:36:53 -- nvmf/common.sh@296 -- # x722=() 00:11:32.906 17:36:53 -- nvmf/common.sh@296 -- # local -ga x722 00:11:32.906 17:36:53 -- nvmf/common.sh@297 -- # mlx=() 00:11:32.906 17:36:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:32.906 17:36:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.906 17:36:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:32.906 17:36:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:32.906 17:36:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:32.906 17:36:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:32.906 17:36:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:32.906 17:36:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:32.906 17:36:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:32.906 17:36:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:32.906 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:32.906 17:36:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:32.907 17:36:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:32.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:32.907 17:36:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:32.907 17:36:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:32.907 17:36:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.907 17:36:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:32.907 17:36:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.907 17:36:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:11:32.907 Found net devices under 0000:86:00.0: cvl_0_0 00:11:32.907 17:36:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.907 17:36:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:32.907 17:36:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.907 17:36:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:32.907 17:36:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.907 17:36:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:32.907 Found net devices under 0000:86:00.1: cvl_0_1 00:11:32.907 17:36:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.907 17:36:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:32.907 17:36:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:32.907 17:36:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:32.907 17:36:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.907 17:36:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.907 17:36:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.907 17:36:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:32.907 17:36:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.907 17:36:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.907 17:36:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:32.907 17:36:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.907 17:36:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.907 17:36:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:32.907 17:36:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:32.907 17:36:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.907 17:36:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.907 17:36:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.907 17:36:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.907 17:36:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:32.907 17:36:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.907 17:36:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.907 17:36:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.907 17:36:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:32.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:11:32.907 00:11:32.907 --- 10.0.0.2 ping statistics --- 00:11:32.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.907 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:11:32.907 17:36:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:11:32.907 00:11:32.907 --- 10.0.0.1 ping statistics --- 00:11:32.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.907 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:11:32.907 17:36:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.907 17:36:53 -- nvmf/common.sh@410 -- # return 0 00:11:32.907 17:36:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:32.907 17:36:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.907 17:36:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:32.907 17:36:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.907 17:36:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:32.907 17:36:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:32.907 17:36:53 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:32.907 17:36:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:32.907 17:36:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:32.907 17:36:53 -- common/autotest_common.sh@10 -- # set +x 00:11:32.907 17:36:53 -- nvmf/common.sh@469 -- # nvmfpid=517486 00:11:32.907 17:36:53 -- nvmf/common.sh@470 -- # waitforlisten 517486 00:11:32.907 17:36:53 -- common/autotest_common.sh@819 -- # '[' -z 517486 ']' 00:11:32.907 17:36:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.907 17:36:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:32.907 17:36:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.907 17:36:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:32.907 17:36:53 -- common/autotest_common.sh@10 -- # set +x 00:11:32.907 17:36:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.907 [2024-07-24 17:36:54.042098] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:32.907 [2024-07-24 17:36:54.042148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.907 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.907 [2024-07-24 17:36:54.099530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.907 [2024-07-24 17:36:54.177982] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:32.907 [2024-07-24 17:36:54.178098] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.907 [2024-07-24 17:36:54.178106] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.907 [2024-07-24 17:36:54.178113] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
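For reference, the TCP test environment prepared just above by nvmf_tcp_init (before nvmf_tgt was started) boils down to the steps below. The interface names (cvl_0_0, cvl_0_1), addresses, namespace name and port are taken directly from the trace; the snippet is only a minimal sketch of the flow, not the exact common.sh implementation.
# move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1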
00:11:32.907 [2024-07-24 17:36:54.178160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.907 [2024-07-24 17:36:54.178176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.907 [2024-07-24 17:36:54.178276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.907 [2024-07-24 17:36:54.178277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.476 17:36:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:33.476 17:36:54 -- common/autotest_common.sh@852 -- # return 0 00:11:33.476 17:36:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:33.476 17:36:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:33.476 17:36:54 -- common/autotest_common.sh@10 -- # set +x 00:11:33.476 17:36:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.476 17:36:54 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:33.476 17:36:54 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:33.476 17:36:54 -- target/multitarget.sh@21 -- # jq length 00:11:33.476 17:36:54 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:33.476 17:36:54 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:33.476 "nvmf_tgt_1" 00:11:33.736 17:36:55 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:33.736 "nvmf_tgt_2" 00:11:33.736 17:36:55 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:33.736 17:36:55 -- target/multitarget.sh@28 -- # jq length 00:11:33.736 17:36:55 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:33.736 17:36:55 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:33.996 true 00:11:33.996 17:36:55 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:33.996 true 00:11:33.996 17:36:55 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:33.996 17:36:55 -- target/multitarget.sh@35 -- # jq length 00:11:33.996 17:36:55 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:33.996 17:36:55 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:33.996 17:36:55 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:33.996 17:36:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:33.996 17:36:55 -- nvmf/common.sh@116 -- # sync 00:11:33.996 17:36:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:33.996 17:36:55 -- nvmf/common.sh@119 -- # set +e 00:11:33.996 17:36:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:33.996 17:36:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:33.996 rmmod nvme_tcp 00:11:34.255 rmmod nvme_fabrics 00:11:34.255 rmmod nvme_keyring 00:11:34.255 17:36:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:34.255 17:36:55 -- nvmf/common.sh@123 -- # set -e 00:11:34.255 17:36:55 -- nvmf/common.sh@124 -- # return 0 
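The multitarget flow being exercised here amounts to the following. The script path, target names, -s 32 size argument and the jq length checks all appear in the trace; the snippet is an illustrative condensation of that flow, not the test script itself.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
# on top of the default target, create two more and confirm all three are listed
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc_py nvmf_get_targets | jq length    # expected: 3
# remove them again and confirm only the default target remains
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
$rpc_py nvmf_get_targets | jq length    # expected: 1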
00:11:34.255 17:36:55 -- nvmf/common.sh@477 -- # '[' -n 517486 ']' 00:11:34.255 17:36:55 -- nvmf/common.sh@478 -- # killprocess 517486 00:11:34.255 17:36:55 -- common/autotest_common.sh@926 -- # '[' -z 517486 ']' 00:11:34.255 17:36:55 -- common/autotest_common.sh@930 -- # kill -0 517486 00:11:34.255 17:36:55 -- common/autotest_common.sh@931 -- # uname 00:11:34.255 17:36:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:34.255 17:36:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 517486 00:11:34.255 17:36:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:34.256 17:36:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:34.256 17:36:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 517486' 00:11:34.256 killing process with pid 517486 00:11:34.256 17:36:55 -- common/autotest_common.sh@945 -- # kill 517486 00:11:34.256 17:36:55 -- common/autotest_common.sh@950 -- # wait 517486 00:11:34.515 17:36:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:34.515 17:36:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:34.515 17:36:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:34.515 17:36:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.515 17:36:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:34.515 17:36:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.515 17:36:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.515 17:36:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.426 17:36:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:36.426 00:11:36.426 real 0m9.032s 00:11:36.426 user 0m8.726s 00:11:36.426 sys 0m4.162s 00:11:36.426 17:36:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.426 17:36:57 -- common/autotest_common.sh@10 -- # set +x 00:11:36.426 ************************************ 00:11:36.426 END TEST nvmf_multitarget 00:11:36.426 ************************************ 00:11:36.426 17:36:57 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:36.426 17:36:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:36.426 17:36:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:36.426 17:36:57 -- common/autotest_common.sh@10 -- # set +x 00:11:36.426 ************************************ 00:11:36.426 START TEST nvmf_rpc 00:11:36.426 ************************************ 00:11:36.426 17:36:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:36.686 * Looking for test storage... 
00:11:36.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.686 17:36:58 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.686 17:36:58 -- nvmf/common.sh@7 -- # uname -s 00:11:36.686 17:36:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.686 17:36:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.686 17:36:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.686 17:36:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.686 17:36:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.686 17:36:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.686 17:36:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.686 17:36:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.686 17:36:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.686 17:36:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.686 17:36:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.686 17:36:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.686 17:36:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.686 17:36:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.686 17:36:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.686 17:36:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.686 17:36:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.686 17:36:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.686 17:36:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.686 17:36:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.686 17:36:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.686 17:36:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.686 17:36:58 -- paths/export.sh@5 -- # export PATH 00:11:36.686 17:36:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.686 17:36:58 -- nvmf/common.sh@46 -- # : 0 00:11:36.686 17:36:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:36.686 17:36:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:36.686 17:36:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:36.686 17:36:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.686 17:36:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.686 17:36:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:36.686 17:36:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:36.686 17:36:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:36.686 17:36:58 -- target/rpc.sh@11 -- # loops=5 00:11:36.686 17:36:58 -- target/rpc.sh@23 -- # nvmftestinit 00:11:36.686 17:36:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:36.686 17:36:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.686 17:36:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:36.686 17:36:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:36.686 17:36:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:36.686 17:36:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.686 17:36:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.686 17:36:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.686 17:36:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:36.686 17:36:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:36.686 17:36:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:36.686 17:36:58 -- common/autotest_common.sh@10 -- # set +x 00:11:42.018 17:37:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:42.018 17:37:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:42.018 17:37:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:42.018 17:37:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:42.018 17:37:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:42.018 17:37:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:42.018 17:37:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:42.018 17:37:03 -- nvmf/common.sh@294 -- # net_devs=() 00:11:42.018 17:37:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:42.018 17:37:03 -- nvmf/common.sh@295 -- # e810=() 00:11:42.018 17:37:03 -- nvmf/common.sh@295 -- # local -ga e810 00:11:42.018 
17:37:03 -- nvmf/common.sh@296 -- # x722=() 00:11:42.018 17:37:03 -- nvmf/common.sh@296 -- # local -ga x722 00:11:42.018 17:37:03 -- nvmf/common.sh@297 -- # mlx=() 00:11:42.018 17:37:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:42.018 17:37:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.018 17:37:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:42.018 17:37:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:42.018 17:37:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:42.018 17:37:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:42.018 17:37:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:42.018 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:42.018 17:37:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:42.018 17:37:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:42.018 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:42.018 17:37:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:42.018 17:37:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:42.018 17:37:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.018 17:37:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:42.018 17:37:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.018 17:37:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:42.018 Found net devices under 0000:86:00.0: cvl_0_0 00:11:42.018 17:37:03 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:42.018 17:37:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:42.018 17:37:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.018 17:37:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:42.018 17:37:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.018 17:37:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:42.018 Found net devices under 0000:86:00.1: cvl_0_1 00:11:42.018 17:37:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.018 17:37:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:42.018 17:37:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:42.018 17:37:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:42.018 17:37:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:42.018 17:37:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.018 17:37:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.018 17:37:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.018 17:37:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:42.018 17:37:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.018 17:37:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.018 17:37:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:42.018 17:37:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.018 17:37:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.018 17:37:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:42.018 17:37:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:42.018 17:37:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.018 17:37:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.018 17:37:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.018 17:37:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.018 17:37:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:42.018 17:37:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.277 17:37:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.277 17:37:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.277 17:37:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:42.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:11:42.277 00:11:42.277 --- 10.0.0.2 ping statistics --- 00:11:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.277 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:11:42.277 17:37:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:11:42.277 00:11:42.277 --- 10.0.0.1 ping statistics --- 00:11:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.277 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:11:42.277 17:37:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.277 17:37:03 -- nvmf/common.sh@410 -- # return 0 00:11:42.277 17:37:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:42.277 17:37:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.277 17:37:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:42.277 17:37:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:42.277 17:37:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.277 17:37:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:42.277 17:37:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:42.277 17:37:03 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:42.277 17:37:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:42.277 17:37:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:42.277 17:37:03 -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 17:37:03 -- nvmf/common.sh@469 -- # nvmfpid=521306 00:11:42.277 17:37:03 -- nvmf/common.sh@470 -- # waitforlisten 521306 00:11:42.277 17:37:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.277 17:37:03 -- common/autotest_common.sh@819 -- # '[' -z 521306 ']' 00:11:42.277 17:37:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.277 17:37:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:42.277 17:37:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.277 17:37:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:42.277 17:37:03 -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 [2024-07-24 17:37:03.817761] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:42.277 [2024-07-24 17:37:03.817805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.277 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.277 [2024-07-24 17:37:03.874959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.537 [2024-07-24 17:37:03.950686] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:42.537 [2024-07-24 17:37:03.950800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.537 [2024-07-24 17:37:03.950808] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.537 [2024-07-24 17:37:03.950815] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:42.537 [2024-07-24 17:37:03.950854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.537 [2024-07-24 17:37:03.950955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.537 [2024-07-24 17:37:03.950974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.537 [2024-07-24 17:37:03.950976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.104 17:37:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:43.104 17:37:04 -- common/autotest_common.sh@852 -- # return 0 00:11:43.104 17:37:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:43.104 17:37:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:43.104 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:11:43.104 17:37:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.104 17:37:04 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:43.104 17:37:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.104 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:11:43.104 17:37:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.104 17:37:04 -- target/rpc.sh@26 -- # stats='{ 00:11:43.104 "tick_rate": 2300000000, 00:11:43.104 "poll_groups": [ 00:11:43.104 { 00:11:43.104 "name": "nvmf_tgt_poll_group_0", 00:11:43.104 "admin_qpairs": 0, 00:11:43.104 "io_qpairs": 0, 00:11:43.104 "current_admin_qpairs": 0, 00:11:43.104 "current_io_qpairs": 0, 00:11:43.104 "pending_bdev_io": 0, 00:11:43.104 "completed_nvme_io": 0, 00:11:43.104 "transports": [] 00:11:43.104 }, 00:11:43.104 { 00:11:43.104 "name": "nvmf_tgt_poll_group_1", 00:11:43.104 "admin_qpairs": 0, 00:11:43.104 "io_qpairs": 0, 00:11:43.104 "current_admin_qpairs": 0, 00:11:43.104 "current_io_qpairs": 0, 00:11:43.104 "pending_bdev_io": 0, 00:11:43.104 "completed_nvme_io": 0, 00:11:43.104 "transports": [] 00:11:43.104 }, 00:11:43.104 { 00:11:43.104 "name": "nvmf_tgt_poll_group_2", 00:11:43.104 "admin_qpairs": 0, 00:11:43.104 "io_qpairs": 0, 00:11:43.104 "current_admin_qpairs": 0, 00:11:43.104 "current_io_qpairs": 0, 00:11:43.104 "pending_bdev_io": 0, 00:11:43.104 "completed_nvme_io": 0, 00:11:43.104 "transports": [] 00:11:43.104 }, 00:11:43.104 { 00:11:43.104 "name": "nvmf_tgt_poll_group_3", 00:11:43.104 "admin_qpairs": 0, 00:11:43.104 "io_qpairs": 0, 00:11:43.104 "current_admin_qpairs": 0, 00:11:43.104 "current_io_qpairs": 0, 00:11:43.104 "pending_bdev_io": 0, 00:11:43.104 "completed_nvme_io": 0, 00:11:43.104 "transports": [] 00:11:43.104 } 00:11:43.104 ] 00:11:43.104 }' 00:11:43.104 17:37:04 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:43.104 17:37:04 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:43.104 17:37:04 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:43.104 17:37:04 -- target/rpc.sh@15 -- # wc -l 00:11:43.364 17:37:04 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:43.364 17:37:04 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:43.364 17:37:04 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:43.364 17:37:04 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.364 17:37:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.364 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:11:43.364 [2024-07-24 17:37:04.766680] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.364 17:37:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.364 17:37:04 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:43.364 17:37:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.364 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:11:43.364 17:37:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.364 17:37:04 -- target/rpc.sh@33 -- # stats='{ 00:11:43.364 "tick_rate": 2300000000, 00:11:43.364 "poll_groups": [ 00:11:43.364 { 00:11:43.364 "name": "nvmf_tgt_poll_group_0", 00:11:43.364 "admin_qpairs": 0, 00:11:43.364 "io_qpairs": 0, 00:11:43.364 "current_admin_qpairs": 0, 00:11:43.364 "current_io_qpairs": 0, 00:11:43.364 "pending_bdev_io": 0, 00:11:43.364 "completed_nvme_io": 0, 00:11:43.364 "transports": [ 00:11:43.364 { 00:11:43.364 "trtype": "TCP" 00:11:43.364 } 00:11:43.364 ] 00:11:43.364 }, 00:11:43.364 { 00:11:43.364 "name": "nvmf_tgt_poll_group_1", 00:11:43.364 "admin_qpairs": 0, 00:11:43.364 "io_qpairs": 0, 00:11:43.364 "current_admin_qpairs": 0, 00:11:43.364 "current_io_qpairs": 0, 00:11:43.364 "pending_bdev_io": 0, 00:11:43.364 "completed_nvme_io": 0, 00:11:43.364 "transports": [ 00:11:43.364 { 00:11:43.364 "trtype": "TCP" 00:11:43.364 } 00:11:43.364 ] 00:11:43.364 }, 00:11:43.364 { 00:11:43.364 "name": "nvmf_tgt_poll_group_2", 00:11:43.364 "admin_qpairs": 0, 00:11:43.364 "io_qpairs": 0, 00:11:43.364 "current_admin_qpairs": 0, 00:11:43.364 "current_io_qpairs": 0, 00:11:43.364 "pending_bdev_io": 0, 00:11:43.364 "completed_nvme_io": 0, 00:11:43.364 "transports": [ 00:11:43.364 { 00:11:43.364 "trtype": "TCP" 00:11:43.364 } 00:11:43.364 ] 00:11:43.364 }, 00:11:43.364 { 00:11:43.364 "name": "nvmf_tgt_poll_group_3", 00:11:43.364 "admin_qpairs": 0, 00:11:43.364 "io_qpairs": 0, 00:11:43.364 "current_admin_qpairs": 0, 00:11:43.364 "current_io_qpairs": 0, 00:11:43.364 "pending_bdev_io": 0, 00:11:43.364 "completed_nvme_io": 0, 00:11:43.364 "transports": [ 00:11:43.364 { 00:11:43.364 "trtype": "TCP" 00:11:43.364 } 00:11:43.364 ] 00:11:43.364 } 00:11:43.364 ] 00:11:43.364 }' 00:11:43.364 17:37:04 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:43.364 17:37:04 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:43.364 17:37:04 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:43.364 17:37:04 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:43.364 17:37:04 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:43.364 17:37:04 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:43.364 17:37:04 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:43.364 17:37:04 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:43.364 17:37:04 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:43.364 17:37:04 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:43.364 17:37:04 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:43.365 17:37:04 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:43.365 17:37:04 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:43.365 17:37:04 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:43.365 17:37:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.365 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:11:43.365 Malloc1 00:11:43.365 17:37:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.365 17:37:04 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:43.365 17:37:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.365 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:11:43.365 
17:37:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.365 17:37:04 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:43.365 17:37:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.365 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:11:43.365 17:37:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.365 17:37:04 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:43.365 17:37:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.365 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:11:43.365 17:37:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.365 17:37:04 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.365 17:37:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.365 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:11:43.365 [2024-07-24 17:37:04.914891] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.365 17:37:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.365 17:37:04 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:43.365 17:37:04 -- common/autotest_common.sh@640 -- # local es=0 00:11:43.365 17:37:04 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:43.365 17:37:04 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:43.365 17:37:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:43.365 17:37:04 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:43.365 17:37:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:43.365 17:37:04 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:43.365 17:37:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:43.365 17:37:04 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:43.365 17:37:04 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:43.365 17:37:04 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:43.365 [2024-07-24 17:37:04.947690] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:43.365 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:43.365 could not add new controller: failed to write to nvme-fabrics device 00:11:43.365 17:37:04 -- common/autotest_common.sh@643 -- # es=1 00:11:43.365 17:37:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:43.365 17:37:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:43.365 17:37:04 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:11:43.365 17:37:04 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:43.365 17:37:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.365 17:37:04 -- common/autotest_common.sh@10 -- # set +x 00:11:43.365 17:37:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.365 17:37:04 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.741 17:37:06 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.741 17:37:06 -- common/autotest_common.sh@1177 -- # local i=0 00:11:44.741 17:37:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.741 17:37:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:44.741 17:37:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:46.646 17:37:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:46.646 17:37:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:46.646 17:37:08 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.646 17:37:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:46.646 17:37:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.646 17:37:08 -- common/autotest_common.sh@1187 -- # return 0 00:11:46.646 17:37:08 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.646 17:37:08 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.646 17:37:08 -- common/autotest_common.sh@1198 -- # local i=0 00:11:46.646 17:37:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:46.646 17:37:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.646 17:37:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:46.646 17:37:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.646 17:37:08 -- common/autotest_common.sh@1210 -- # return 0 00:11:46.646 17:37:08 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:46.646 17:37:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.646 17:37:08 -- common/autotest_common.sh@10 -- # set +x 00:11:46.646 17:37:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.646 17:37:08 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.646 17:37:08 -- common/autotest_common.sh@640 -- # local es=0 00:11:46.646 17:37:08 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.646 17:37:08 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:46.646 17:37:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.646 17:37:08 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:46.646 17:37:08 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.646 17:37:08 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:46.646 17:37:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.646 17:37:08 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:46.646 17:37:08 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:46.646 17:37:08 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.646 [2024-07-24 17:37:08.183462] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:46.646 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:46.646 could not add new controller: failed to write to nvme-fabrics device 00:11:46.647 17:37:08 -- common/autotest_common.sh@643 -- # es=1 00:11:46.647 17:37:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:46.647 17:37:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:46.647 17:37:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:46.647 17:37:08 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:46.647 17:37:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.647 17:37:08 -- common/autotest_common.sh@10 -- # set +x 00:11:46.647 17:37:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.647 17:37:08 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.023 17:37:09 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.023 17:37:09 -- common/autotest_common.sh@1177 -- # local i=0 00:11:48.023 17:37:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.023 17:37:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:48.023 17:37:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:49.928 17:37:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:49.928 17:37:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:49.928 17:37:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.928 17:37:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:49.928 17:37:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.928 17:37:11 -- common/autotest_common.sh@1187 -- # return 0 00:11:49.928 17:37:11 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.928 17:37:11 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.928 17:37:11 -- common/autotest_common.sh@1198 -- # local i=0 00:11:49.928 17:37:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:49.928 17:37:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.928 17:37:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:49.928 17:37:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.928 17:37:11 -- common/autotest_common.sh@1210 -- # return 0 00:11:49.928 17:37:11 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.928 17:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.928 17:37:11 -- common/autotest_common.sh@10 -- # set +x 00:11:49.928 17:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.928 17:37:11 -- target/rpc.sh@81 -- # seq 1 5 00:11:49.928 17:37:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:49.928 17:37:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.928 17:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.928 17:37:11 -- common/autotest_common.sh@10 -- # set +x 00:11:49.928 17:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.928 17:37:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.928 17:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.928 17:37:11 -- common/autotest_common.sh@10 -- # set +x 00:11:49.928 [2024-07-24 17:37:11.499097] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.928 17:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.928 17:37:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:49.928 17:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.928 17:37:11 -- common/autotest_common.sh@10 -- # set +x 00:11:49.928 17:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.928 17:37:11 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.928 17:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.928 17:37:11 -- common/autotest_common.sh@10 -- # set +x 00:11:49.928 17:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.928 17:37:11 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.312 17:37:12 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.312 17:37:12 -- common/autotest_common.sh@1177 -- # local i=0 00:11:51.312 17:37:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.312 17:37:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:51.312 17:37:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:53.218 17:37:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:53.218 17:37:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:53.218 17:37:14 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.218 17:37:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:53.218 17:37:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.218 17:37:14 -- common/autotest_common.sh@1187 -- # return 0 00:11:53.218 17:37:14 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.218 17:37:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.218 17:37:14 -- common/autotest_common.sh@1198 -- # local i=0 00:11:53.218 17:37:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:53.218 17:37:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
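Each pass of this rpc.sh loop follows the same subsystem lifecycle. The commands below are lifted from the trace (rpc_cmd being the harness's RPC helper), with the host-side connect/disconnect shown alongside, purely as a condensed illustration under those assumptions.
# target side: build a subsystem, expose it over TCP, back it with Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
# initiator side: connect, wait for the serial to appear in lsblk, then disconnect
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# tear down so the next iteration starts clean
rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1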
00:11:53.218 17:37:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:53.218 17:37:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.218 17:37:14 -- common/autotest_common.sh@1210 -- # return 0 00:11:53.218 17:37:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.218 17:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.218 17:37:14 -- common/autotest_common.sh@10 -- # set +x 00:11:53.218 17:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.218 17:37:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.218 17:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.218 17:37:14 -- common/autotest_common.sh@10 -- # set +x 00:11:53.218 17:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.218 17:37:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:53.218 17:37:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.218 17:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.218 17:37:14 -- common/autotest_common.sh@10 -- # set +x 00:11:53.477 17:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.477 17:37:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.477 17:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.477 17:37:14 -- common/autotest_common.sh@10 -- # set +x 00:11:53.477 [2024-07-24 17:37:14.821265] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.477 17:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.477 17:37:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:53.477 17:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.477 17:37:14 -- common/autotest_common.sh@10 -- # set +x 00:11:53.477 17:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.477 17:37:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.477 17:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.477 17:37:14 -- common/autotest_common.sh@10 -- # set +x 00:11:53.477 17:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.477 17:37:14 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.417 17:37:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.417 17:37:16 -- common/autotest_common.sh@1177 -- # local i=0 00:11:54.417 17:37:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.417 17:37:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:54.417 17:37:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:56.953 17:37:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:56.953 17:37:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:56.953 17:37:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.953 17:37:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:56.953 17:37:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.953 17:37:18 -- 
common/autotest_common.sh@1187 -- # return 0 00:11:56.953 17:37:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.953 17:37:18 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.953 17:37:18 -- common/autotest_common.sh@1198 -- # local i=0 00:11:56.953 17:37:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:56.953 17:37:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.953 17:37:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:56.953 17:37:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.953 17:37:18 -- common/autotest_common.sh@1210 -- # return 0 00:11:56.953 17:37:18 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.953 17:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:56.953 17:37:18 -- common/autotest_common.sh@10 -- # set +x 00:11:56.953 17:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:56.953 17:37:18 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.953 17:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:56.953 17:37:18 -- common/autotest_common.sh@10 -- # set +x 00:11:56.953 17:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:56.953 17:37:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.953 17:37:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.953 17:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:56.953 17:37:18 -- common/autotest_common.sh@10 -- # set +x 00:11:56.953 17:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:56.953 17:37:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.953 17:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:56.953 17:37:18 -- common/autotest_common.sh@10 -- # set +x 00:11:56.954 [2024-07-24 17:37:18.187269] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.954 17:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:56.954 17:37:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.954 17:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:56.954 17:37:18 -- common/autotest_common.sh@10 -- # set +x 00:11:56.954 17:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:56.954 17:37:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.954 17:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:56.954 17:37:18 -- common/autotest_common.sh@10 -- # set +x 00:11:56.954 17:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:56.954 17:37:18 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.892 17:37:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.892 17:37:19 -- common/autotest_common.sh@1177 -- # local i=0 00:11:57.892 17:37:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.892 17:37:19 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:11:57.892 17:37:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:59.798 17:37:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:59.798 17:37:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:59.798 17:37:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.057 17:37:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:00.057 17:37:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.057 17:37:21 -- common/autotest_common.sh@1187 -- # return 0 00:12:00.057 17:37:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.057 17:37:21 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.057 17:37:21 -- common/autotest_common.sh@1198 -- # local i=0 00:12:00.057 17:37:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:00.057 17:37:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.057 17:37:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:00.057 17:37:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.057 17:37:21 -- common/autotest_common.sh@1210 -- # return 0 00:12:00.057 17:37:21 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.058 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.058 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:12:00.058 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.058 17:37:21 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.058 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.058 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:12:00.058 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.058 17:37:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:00.058 17:37:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.058 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.058 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:12:00.058 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.058 17:37:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.058 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.058 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:12:00.058 [2024-07-24 17:37:21.526466] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.058 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.058 17:37:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:00.058 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.058 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:12:00.058 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.058 17:37:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:00.058 17:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.058 17:37:21 -- common/autotest_common.sh@10 -- # set +x 00:12:00.058 17:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.058 
17:37:21 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.438 17:37:22 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.438 17:37:22 -- common/autotest_common.sh@1177 -- # local i=0 00:12:01.438 17:37:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.438 17:37:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:01.438 17:37:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:03.388 17:37:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:03.388 17:37:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:03.388 17:37:24 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.388 17:37:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:03.388 17:37:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.388 17:37:24 -- common/autotest_common.sh@1187 -- # return 0 00:12:03.389 17:37:24 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.389 17:37:24 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.389 17:37:24 -- common/autotest_common.sh@1198 -- # local i=0 00:12:03.389 17:37:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:03.389 17:37:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.389 17:37:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:03.389 17:37:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.389 17:37:24 -- common/autotest_common.sh@1210 -- # return 0 00:12:03.389 17:37:24 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:03.389 17:37:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.389 17:37:24 -- common/autotest_common.sh@10 -- # set +x 00:12:03.389 17:37:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.389 17:37:24 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.389 17:37:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.389 17:37:24 -- common/autotest_common.sh@10 -- # set +x 00:12:03.389 17:37:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.389 17:37:24 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:03.389 17:37:24 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.389 17:37:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.389 17:37:24 -- common/autotest_common.sh@10 -- # set +x 00:12:03.389 17:37:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.389 17:37:24 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.389 17:37:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.389 17:37:24 -- common/autotest_common.sh@10 -- # set +x 00:12:03.389 [2024-07-24 17:37:24.864980] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.389 17:37:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.389 17:37:24 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:03.389 
17:37:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.389 17:37:24 -- common/autotest_common.sh@10 -- # set +x 00:12:03.389 17:37:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.389 17:37:24 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.389 17:37:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.389 17:37:24 -- common/autotest_common.sh@10 -- # set +x 00:12:03.389 17:37:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.389 17:37:24 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.775 17:37:25 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.775 17:37:25 -- common/autotest_common.sh@1177 -- # local i=0 00:12:04.775 17:37:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.775 17:37:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:04.775 17:37:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:06.680 17:37:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:06.680 17:37:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:06.680 17:37:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.680 17:37:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:06.680 17:37:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.680 17:37:27 -- common/autotest_common.sh@1187 -- # return 0 00:12:06.680 17:37:27 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.680 17:37:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.680 17:37:28 -- common/autotest_common.sh@1198 -- # local i=0 00:12:06.680 17:37:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:06.680 17:37:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.680 17:37:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:06.680 17:37:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.680 17:37:28 -- common/autotest_common.sh@1210 -- # return 0 00:12:06.680 17:37:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 -- target/rpc.sh@99 -- # seq 1 5 00:12:06.680 17:37:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:06.680 17:37:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 
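The five passes above are target/rpc.sh's connect/disconnect loop: each pass creates the subsystem with serial SPDKISFASTANDAWESOME, adds a TCP listener on 10.0.0.2:4420, attaches namespace 5 backed by the Malloc1 bdev, allows any host, connects with nvme-cli, waits until lsblk reports a block device with that serial, then disconnects, waits for it to disappear, and removes the namespace and subsystem. A minimal sketch of one such cycle, assuming a target already serving RPCs, rpc.py on PATH, and an existing Malloc1 bdev; wait_for_serial is an illustrative stand-in for the trace's waitforserial helpers, and the trace's nvme connect additionally passes --hostnqn/--hostid:

    #!/usr/bin/env bash
    set -e

    NQN=nqn.2016-06.io.spdk:cnode1
    SERIAL=SPDKISFASTANDAWESOME
    ADDR=10.0.0.2 PORT=4420

    wait_for_serial() {        # poll until lsblk shows exactly $1 devices with our serial
      local want=$1 count
      for _ in $(seq 1 15); do
        count=$(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL" || true)
        [[ $count -eq $want ]] && return 0
        sleep 2
      done
      return 1                 # under set -e this fails the run
    }

    rpc.py nvmf_create_subsystem "$NQN" -s "$SERIAL"
    rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a "$ADDR" -s "$PORT"
    rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    rpc.py nvmf_subsystem_allow_any_host "$NQN"

    nvme connect -t tcp -n "$NQN" -a "$ADDR" -s "$PORT"
    wait_for_serial 1          # namespace visible on the host

    nvme disconnect -n "$NQN"
    wait_for_serial 0          # and gone again after disconnect

    rpc.py nvmf_subsystem_remove_ns "$NQN" 5
    rpc.py nvmf_delete_subsystem "$NQN"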
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 [2024-07-24 17:37:28.164319] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:06.680 17:37:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 [2024-07-24 17:37:28.212420] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.680 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.680 17:37:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.680 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.680 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.681 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.681 17:37:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.681 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.681 17:37:28 -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.681 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.681 17:37:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.681 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.681 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.681 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.681 17:37:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:06.681 17:37:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.681 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.681 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.681 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.681 17:37:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.681 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.681 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.681 [2024-07-24 17:37:28.260565] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.681 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.681 17:37:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.681 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.681 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.681 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.681 17:37:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.681 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.681 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:06.940 17:37:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 [2024-07-24 17:37:28.312748] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 
17:37:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:06.940 17:37:28 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 [2024-07-24 17:37:28.360918] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
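The second loop (rpc.sh@99 through @107) repeats the lifecycle purely over RPC, with no host connection: create the subsystem, add the TCP listener, attach a namespace (the target assigns nsid 1 here, which is what @105 removes), allow any host, then remove the namespace and delete the subsystem, five times in a row. Condensed to a single pass, under the same assumptions as the sketch above:

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: lowest free nsid
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1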
00:12:06.940 17:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.940 17:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.940 17:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.940 17:37:28 -- target/rpc.sh@110 -- # stats='{ 00:12:06.940 "tick_rate": 2300000000, 00:12:06.940 "poll_groups": [ 00:12:06.940 { 00:12:06.940 "name": "nvmf_tgt_poll_group_0", 00:12:06.940 "admin_qpairs": 2, 00:12:06.940 "io_qpairs": 168, 00:12:06.940 "current_admin_qpairs": 0, 00:12:06.940 "current_io_qpairs": 0, 00:12:06.940 "pending_bdev_io": 0, 00:12:06.940 "completed_nvme_io": 269, 00:12:06.940 "transports": [ 00:12:06.940 { 00:12:06.940 "trtype": "TCP" 00:12:06.940 } 00:12:06.940 ] 00:12:06.940 }, 00:12:06.940 { 00:12:06.940 "name": "nvmf_tgt_poll_group_1", 00:12:06.940 "admin_qpairs": 2, 00:12:06.940 "io_qpairs": 168, 00:12:06.940 "current_admin_qpairs": 0, 00:12:06.940 "current_io_qpairs": 0, 00:12:06.940 "pending_bdev_io": 0, 00:12:06.940 "completed_nvme_io": 218, 00:12:06.940 "transports": [ 00:12:06.940 { 00:12:06.940 "trtype": "TCP" 00:12:06.940 } 00:12:06.940 ] 00:12:06.940 }, 00:12:06.940 { 00:12:06.940 "name": "nvmf_tgt_poll_group_2", 00:12:06.940 "admin_qpairs": 1, 00:12:06.940 "io_qpairs": 168, 00:12:06.940 "current_admin_qpairs": 0, 00:12:06.940 "current_io_qpairs": 0, 00:12:06.940 "pending_bdev_io": 0, 00:12:06.940 "completed_nvme_io": 219, 00:12:06.940 "transports": [ 00:12:06.940 { 00:12:06.940 "trtype": "TCP" 00:12:06.940 } 00:12:06.940 ] 00:12:06.940 }, 00:12:06.940 { 00:12:06.940 "name": "nvmf_tgt_poll_group_3", 00:12:06.940 "admin_qpairs": 2, 00:12:06.940 "io_qpairs": 168, 00:12:06.940 "current_admin_qpairs": 0, 00:12:06.940 "current_io_qpairs": 0, 00:12:06.940 "pending_bdev_io": 0, 00:12:06.940 "completed_nvme_io": 316, 00:12:06.940 "transports": [ 00:12:06.940 { 00:12:06.940 "trtype": "TCP" 00:12:06.940 } 00:12:06.940 ] 00:12:06.940 } 00:12:06.940 ] 00:12:06.940 }' 00:12:06.940 17:37:28 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:06.940 17:37:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:06.940 17:37:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:06.940 17:37:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:06.940 17:37:28 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:06.940 17:37:28 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:06.940 17:37:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:06.940 17:37:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:06.940 17:37:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:06.940 17:37:28 -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:06.940 17:37:28 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:06.940 17:37:28 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:06.940 17:37:28 -- target/rpc.sh@123 -- # nvmftestfini 00:12:06.940 17:37:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:06.940 17:37:28 -- nvmf/common.sh@116 -- # sync 00:12:06.940 17:37:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:06.940 17:37:28 -- nvmf/common.sh@119 -- # set +e 00:12:06.940 17:37:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:06.940 17:37:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:06.940 rmmod nvme_tcp 00:12:06.940 rmmod nvme_fabrics 00:12:07.200 rmmod nvme_keyring 00:12:07.200 17:37:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:07.200 17:37:28 -- nvmf/common.sh@123 -- # set -e 00:12:07.200 17:37:28 -- 
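The nvmf_get_stats dump above is then reduced by the script's jsum helper: a jq filter pulls one number per poll group, awk sums them, and the test only asserts that the totals are positive (7 admin qpairs and 672 I/O qpairs across the four poll groups in this run). A sketch of that aggregation, assuming rpc.py on PATH and the jq/awk pipeline shown in the trace:

    jsum() {                   # sum a numeric per-poll-group field from nvmf_get_stats
      local filter=$1
      rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    admin_total=$(jsum '.poll_groups[].admin_qpairs')
    io_total=$(jsum '.poll_groups[].io_qpairs')
    (( admin_total > 0 && io_total > 0 )) && echo "qpairs: $admin_total admin, $io_total I/O"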
nvmf/common.sh@124 -- # return 0 00:12:07.200 17:37:28 -- nvmf/common.sh@477 -- # '[' -n 521306 ']' 00:12:07.200 17:37:28 -- nvmf/common.sh@478 -- # killprocess 521306 00:12:07.200 17:37:28 -- common/autotest_common.sh@926 -- # '[' -z 521306 ']' 00:12:07.200 17:37:28 -- common/autotest_common.sh@930 -- # kill -0 521306 00:12:07.200 17:37:28 -- common/autotest_common.sh@931 -- # uname 00:12:07.200 17:37:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:07.200 17:37:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 521306 00:12:07.200 17:37:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:07.200 17:37:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:07.200 17:37:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 521306' 00:12:07.200 killing process with pid 521306 00:12:07.200 17:37:28 -- common/autotest_common.sh@945 -- # kill 521306 00:12:07.200 17:37:28 -- common/autotest_common.sh@950 -- # wait 521306 00:12:07.460 17:37:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:07.460 17:37:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:07.460 17:37:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:07.460 17:37:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.460 17:37:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:07.460 17:37:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.460 17:37:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.460 17:37:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.366 17:37:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:09.366 00:12:09.366 real 0m32.905s 00:12:09.366 user 1m40.700s 00:12:09.366 sys 0m5.809s 00:12:09.366 17:37:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.366 17:37:30 -- common/autotest_common.sh@10 -- # set +x 00:12:09.366 ************************************ 00:12:09.366 END TEST nvmf_rpc 00:12:09.366 ************************************ 00:12:09.366 17:37:30 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:09.366 17:37:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:09.366 17:37:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:09.366 17:37:30 -- common/autotest_common.sh@10 -- # set +x 00:12:09.366 ************************************ 00:12:09.366 START TEST nvmf_invalid 00:12:09.366 ************************************ 00:12:09.366 17:37:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:09.625 * Looking for test storage... 
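Teardown in nvmftestfini unloads the nvme_tcp/nvme_fabrics/nvme_keyring modules and then stops the target via the killprocess helper: check that the recorded pid is still alive, confirm its process name, kill it, and wait for it to exit before the namespace addressing is flushed. A simplified sketch of that shutdown (the pid is whatever nvmfappstart recorded, 521306 in this run):

    stop_target() {
      local pid=$1 name
      kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
      name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for an SPDK app
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true            # reap it if it was our child
    }

    stop_target 521306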
00:12:09.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.625 17:37:31 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.625 17:37:31 -- nvmf/common.sh@7 -- # uname -s 00:12:09.625 17:37:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.625 17:37:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.625 17:37:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.625 17:37:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.625 17:37:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.625 17:37:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.625 17:37:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.625 17:37:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.625 17:37:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.625 17:37:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.625 17:37:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:09.625 17:37:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:09.625 17:37:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.625 17:37:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.625 17:37:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.625 17:37:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.625 17:37:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.625 17:37:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.625 17:37:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.625 17:37:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.625 17:37:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.625 17:37:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.625 17:37:31 -- paths/export.sh@5 -- # export PATH 00:12:09.625 17:37:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.625 17:37:31 -- nvmf/common.sh@46 -- # : 0 00:12:09.625 17:37:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:09.625 17:37:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:09.625 17:37:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:09.625 17:37:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.625 17:37:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.625 17:37:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:09.625 17:37:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:09.625 17:37:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:09.625 17:37:31 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:09.625 17:37:31 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.625 17:37:31 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:09.625 17:37:31 -- target/invalid.sh@14 -- # target=foobar 00:12:09.625 17:37:31 -- target/invalid.sh@16 -- # RANDOM=0 00:12:09.625 17:37:31 -- target/invalid.sh@34 -- # nvmftestinit 00:12:09.625 17:37:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:09.625 17:37:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.625 17:37:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:09.625 17:37:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:09.625 17:37:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:09.625 17:37:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.625 17:37:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.625 17:37:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.625 17:37:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:09.625 17:37:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:09.625 17:37:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:09.625 17:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:14.896 17:37:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:14.896 17:37:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:14.896 17:37:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:14.896 17:37:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:14.896 17:37:35 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:14.896 17:37:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:14.896 17:37:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:14.896 17:37:35 -- nvmf/common.sh@294 -- # net_devs=() 00:12:14.896 17:37:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:14.896 17:37:35 -- nvmf/common.sh@295 -- # e810=() 00:12:14.896 17:37:35 -- nvmf/common.sh@295 -- # local -ga e810 00:12:14.896 17:37:35 -- nvmf/common.sh@296 -- # x722=() 00:12:14.896 17:37:35 -- nvmf/common.sh@296 -- # local -ga x722 00:12:14.896 17:37:35 -- nvmf/common.sh@297 -- # mlx=() 00:12:14.896 17:37:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:14.896 17:37:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.896 17:37:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:14.896 17:37:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:14.896 17:37:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:14.896 17:37:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:14.896 17:37:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:14.896 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:14.896 17:37:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:14.896 17:37:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:14.896 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:14.896 17:37:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:14.896 17:37:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:14.896 
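Device discovery here is ID-driven: the e810/x722/mlx arrays are keyed by vendor:device pairs (0x8086:0x159b is the E810 part both ports on this rig report), and each matching PCI function is then mapped to its kernel netdev through sysfs, producing the "Found net devices under ..." lines that follow. The same walk can be done directly with sysfs alone; a sketch, hard-coding the E810 IDs seen in the trace:

    # list netdevs backed by Intel E810 (0x8086:0x159b) PCI functions, sysfs only
    for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor")
      device=$(cat "$dev/device")
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for net in "$dev"/net/*; do
        [[ -e $net ]] || continue
        echo "Found net devices under ${dev##*/}: ${net##*/}"
      done
    done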
17:37:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.896 17:37:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:14.896 17:37:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.896 17:37:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:14.896 Found net devices under 0000:86:00.0: cvl_0_0 00:12:14.896 17:37:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.896 17:37:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:14.896 17:37:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.896 17:37:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:14.896 17:37:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.896 17:37:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:14.896 Found net devices under 0000:86:00.1: cvl_0_1 00:12:14.896 17:37:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.896 17:37:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:14.896 17:37:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:14.896 17:37:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:14.896 17:37:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:14.896 17:37:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.896 17:37:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.896 17:37:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.896 17:37:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:14.896 17:37:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.896 17:37:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.896 17:37:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:14.896 17:37:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.896 17:37:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.896 17:37:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:14.896 17:37:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:14.896 17:37:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.896 17:37:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.896 17:37:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.896 17:37:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.896 17:37:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:14.896 17:37:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.896 17:37:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.896 17:37:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.896 17:37:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:14.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:12:14.896 00:12:14.896 --- 10.0.0.2 ping statistics --- 00:12:14.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.896 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:12:14.897 17:37:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
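nvmf_tcp_init gives the target port its own network namespace so initiator and target can share one host: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction proves reachability before any NVMe traffic flows. A sketch of that plumbing, using the interface and namespace names from the trace:

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"              # target port lives in its own namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                             # initiator -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> initiator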
00:12:14.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:12:14.897 00:12:14.897 --- 10.0.0.1 ping statistics --- 00:12:14.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.897 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:12:14.897 17:37:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.897 17:37:35 -- nvmf/common.sh@410 -- # return 0 00:12:14.897 17:37:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:14.897 17:37:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.897 17:37:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:14.897 17:37:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:14.897 17:37:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.897 17:37:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:14.897 17:37:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:14.897 17:37:36 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:14.897 17:37:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:14.897 17:37:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:14.897 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:12:14.897 17:37:36 -- nvmf/common.sh@469 -- # nvmfpid=528983 00:12:14.897 17:37:36 -- nvmf/common.sh@470 -- # waitforlisten 528983 00:12:14.897 17:37:36 -- common/autotest_common.sh@819 -- # '[' -z 528983 ']' 00:12:14.897 17:37:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.897 17:37:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:14.897 17:37:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.897 17:37:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:14.897 17:37:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.897 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:12:14.897 [2024-07-24 17:37:36.056087] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:14.897 [2024-07-24 17:37:36.056128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.897 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.897 [2024-07-24 17:37:36.113823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.897 [2024-07-24 17:37:36.192187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:14.897 [2024-07-24 17:37:36.192296] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.897 [2024-07-24 17:37:36.192305] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.897 [2024-07-24 17:37:36.192312] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
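nvmf_invalid then brings up its own target: nvmfappstart launches nvmf_tgt inside the target namespace with core mask 0xF, records the pid (528983 here), and waitforlisten retries against the /var/tmp/spdk.sock RPC socket until the app answers, allowing up to 100 retries. A hedged sketch of that startup gate; the real helper's probe differs in detail, and rpc_get_methods is used here only as a trivial "are you up" request:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for _ in $(seq 1 100); do
      # the target counts as started once the RPC socket answers
      if rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
      fi
      sleep 0.5
    done
    echo "nvmf_tgt up as pid $nvmfpid"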
00:12:14.897 [2024-07-24 17:37:36.192357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.897 [2024-07-24 17:37:36.192375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.897 [2024-07-24 17:37:36.192398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.897 [2024-07-24 17:37:36.192399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.465 17:37:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:15.465 17:37:36 -- common/autotest_common.sh@852 -- # return 0 00:12:15.465 17:37:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:15.465 17:37:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:15.465 17:37:36 -- common/autotest_common.sh@10 -- # set +x 00:12:15.465 17:37:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.465 17:37:36 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.465 17:37:36 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1953 00:12:15.465 [2024-07-24 17:37:37.035720] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:15.725 17:37:37 -- target/invalid.sh@40 -- # out='request: 00:12:15.725 { 00:12:15.725 "nqn": "nqn.2016-06.io.spdk:cnode1953", 00:12:15.725 "tgt_name": "foobar", 00:12:15.725 "method": "nvmf_create_subsystem", 00:12:15.725 "req_id": 1 00:12:15.725 } 00:12:15.725 Got JSON-RPC error response 00:12:15.725 response: 00:12:15.725 { 00:12:15.725 "code": -32603, 00:12:15.725 "message": "Unable to find target foobar" 00:12:15.725 }' 00:12:15.725 17:37:37 -- target/invalid.sh@41 -- # [[ request: 00:12:15.725 { 00:12:15.725 "nqn": "nqn.2016-06.io.spdk:cnode1953", 00:12:15.725 "tgt_name": "foobar", 00:12:15.725 "method": "nvmf_create_subsystem", 00:12:15.725 "req_id": 1 00:12:15.725 } 00:12:15.725 Got JSON-RPC error response 00:12:15.725 response: 00:12:15.725 { 00:12:15.725 "code": -32603, 00:12:15.725 "message": "Unable to find target foobar" 00:12:15.725 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:15.725 17:37:37 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:15.725 17:37:37 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9076 00:12:15.725 [2024-07-24 17:37:37.216373] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9076: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:15.725 17:37:37 -- target/invalid.sh@45 -- # out='request: 00:12:15.725 { 00:12:15.725 "nqn": "nqn.2016-06.io.spdk:cnode9076", 00:12:15.725 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:15.725 "method": "nvmf_create_subsystem", 00:12:15.725 "req_id": 1 00:12:15.725 } 00:12:15.725 Got JSON-RPC error response 00:12:15.725 response: 00:12:15.725 { 00:12:15.725 "code": -32602, 00:12:15.725 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:15.725 }' 00:12:15.725 17:37:37 -- target/invalid.sh@46 -- # [[ request: 00:12:15.725 { 00:12:15.725 "nqn": "nqn.2016-06.io.spdk:cnode9076", 00:12:15.725 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:15.725 "method": "nvmf_create_subsystem", 00:12:15.725 "req_id": 1 00:12:15.725 } 00:12:15.725 Got JSON-RPC error response 00:12:15.725 response: 00:12:15.725 { 00:12:15.725 
"code": -32602, 00:12:15.725 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:15.725 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:15.725 17:37:37 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:15.725 17:37:37 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12339 00:12:15.985 [2024-07-24 17:37:37.392928] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12339: invalid model number 'SPDK_Controller' 00:12:15.985 17:37:37 -- target/invalid.sh@50 -- # out='request: 00:12:15.985 { 00:12:15.985 "nqn": "nqn.2016-06.io.spdk:cnode12339", 00:12:15.985 "model_number": "SPDK_Controller\u001f", 00:12:15.985 "method": "nvmf_create_subsystem", 00:12:15.985 "req_id": 1 00:12:15.985 } 00:12:15.985 Got JSON-RPC error response 00:12:15.985 response: 00:12:15.985 { 00:12:15.985 "code": -32602, 00:12:15.985 "message": "Invalid MN SPDK_Controller\u001f" 00:12:15.985 }' 00:12:15.985 17:37:37 -- target/invalid.sh@51 -- # [[ request: 00:12:15.985 { 00:12:15.985 "nqn": "nqn.2016-06.io.spdk:cnode12339", 00:12:15.985 "model_number": "SPDK_Controller\u001f", 00:12:15.985 "method": "nvmf_create_subsystem", 00:12:15.985 "req_id": 1 00:12:15.985 } 00:12:15.985 Got JSON-RPC error response 00:12:15.985 response: 00:12:15.985 { 00:12:15.985 "code": -32602, 00:12:15.985 "message": "Invalid MN SPDK_Controller\u001f" 00:12:15.985 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:15.985 17:37:37 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:15.985 17:37:37 -- target/invalid.sh@19 -- # local length=21 ll 00:12:15.985 17:37:37 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:15.985 17:37:37 -- target/invalid.sh@21 -- # local chars 00:12:15.985 17:37:37 -- target/invalid.sh@22 -- # local string 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 83 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=S 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 103 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=g 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 53 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=5 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 87 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo 
-e '\x57' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=W 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 43 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=+ 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 116 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=t 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 85 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=U 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 46 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=. 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 40 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+='(' 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 85 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=U 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 99 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=c 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 73 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=I 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 33 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+='!' 
00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 69 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=E 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 65 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=A 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 34 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+='"' 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 73 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=I 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 41 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=')' 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # printf %x 87 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:15.985 17:37:37 -- target/invalid.sh@25 -- # string+=W 00:12:15.985 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.986 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.986 17:37:37 -- target/invalid.sh@25 -- # printf %x 123 00:12:15.986 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:15.986 17:37:37 -- target/invalid.sh@25 -- # string+='{' 00:12:15.986 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.986 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.986 17:37:37 -- target/invalid.sh@25 -- # printf %x 37 00:12:15.986 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:15.986 17:37:37 -- target/invalid.sh@25 -- # string+=% 00:12:15.986 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:15.986 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:15.986 17:37:37 -- target/invalid.sh@28 -- # [[ S == \- ]] 00:12:15.986 17:37:37 -- target/invalid.sh@31 -- # echo 'Sg5W+tU.(UcI!EA"I)W{%' 00:12:15.986 17:37:37 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Sg5W+tU.(UcI!EA"I)W{%' nqn.2016-06.io.spdk:cnode14953 00:12:16.245 [2024-07-24 17:37:37.709986] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14953: invalid serial number 'Sg5W+tU.(UcI!EA"I)W{%' 00:12:16.245 17:37:37 -- target/invalid.sh@54 -- # out='request: 00:12:16.245 { 00:12:16.245 "nqn": "nqn.2016-06.io.spdk:cnode14953", 00:12:16.245 "serial_number": "Sg5W+tU.(UcI!EA\"I)W{%", 00:12:16.245 "method": "nvmf_create_subsystem", 00:12:16.245 "req_id": 1 00:12:16.245 } 00:12:16.245 Got JSON-RPC error 
response 00:12:16.245 response: 00:12:16.245 { 00:12:16.245 "code": -32602, 00:12:16.245 "message": "Invalid SN Sg5W+tU.(UcI!EA\"I)W{%" 00:12:16.245 }' 00:12:16.245 17:37:37 -- target/invalid.sh@55 -- # [[ request: 00:12:16.245 { 00:12:16.245 "nqn": "nqn.2016-06.io.spdk:cnode14953", 00:12:16.245 "serial_number": "Sg5W+tU.(UcI!EA\"I)W{%", 00:12:16.245 "method": "nvmf_create_subsystem", 00:12:16.245 "req_id": 1 00:12:16.245 } 00:12:16.245 Got JSON-RPC error response 00:12:16.245 response: 00:12:16.245 { 00:12:16.245 "code": -32602, 00:12:16.245 "message": "Invalid SN Sg5W+tU.(UcI!EA\"I)W{%" 00:12:16.245 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:16.245 17:37:37 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:16.245 17:37:37 -- target/invalid.sh@19 -- # local length=41 ll 00:12:16.245 17:37:37 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:16.245 17:37:37 -- target/invalid.sh@21 -- # local chars 00:12:16.245 17:37:37 -- target/invalid.sh@22 -- # local string 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 66 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=B 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 95 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=_ 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 66 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=B 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 92 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+='\' 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 54 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=6 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 47 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=/ 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 
17:37:37 -- target/invalid.sh@25 -- # printf %x 82 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=R 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 111 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=o 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 46 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=. 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 83 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=S 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 39 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=\' 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 57 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=9 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 118 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=v 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 71 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=G 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 39 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+=\' 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.246 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # printf %x 40 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:16.246 17:37:37 -- target/invalid.sh@25 -- # string+='(' 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 54 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=6 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 
17:37:37 -- target/invalid.sh@25 -- # printf %x 86 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=V 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 69 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=E 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 96 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+='`' 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 49 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=1 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 118 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=v 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 72 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=H 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 102 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=f 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 38 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+='&' 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 81 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=Q 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 59 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=';' 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 101 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=e 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 
17:37:37 -- target/invalid.sh@25 -- # printf %x 125 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+='}' 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 72 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=H 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 46 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=. 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 74 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=J 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 68 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=D 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 93 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=']' 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 84 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=T 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 98 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=b 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 79 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # string+=O 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.506 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.506 17:37:37 -- target/invalid.sh@25 -- # printf %x 87 00:12:16.507 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:16.507 17:37:37 -- target/invalid.sh@25 -- # string+=W 00:12:16.507 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.507 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.507 17:37:37 -- target/invalid.sh@25 -- # printf %x 60 00:12:16.507 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:16.507 17:37:37 -- target/invalid.sh@25 -- # string+='<' 00:12:16.507 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.507 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.507 
17:37:37 -- target/invalid.sh@25 -- # printf %x 53 00:12:16.507 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:16.507 17:37:37 -- target/invalid.sh@25 -- # string+=5 00:12:16.507 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.507 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.507 17:37:37 -- target/invalid.sh@25 -- # printf %x 87 00:12:16.507 17:37:37 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:16.507 17:37:37 -- target/invalid.sh@25 -- # string+=W 00:12:16.507 17:37:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.507 17:37:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.507 17:37:37 -- target/invalid.sh@28 -- # [[ B == \- ]] 00:12:16.507 17:37:37 -- target/invalid.sh@31 -- # echo 'B_B\6/Ro.S'\''9vG'\''(6VE`1vHf&Q;e}H.JD]TbOW<5W' 00:12:16.507 17:37:37 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'B_B\6/Ro.S'\''9vG'\''(6VE`1vHf&Q;e}H.JD]TbOW<5W' nqn.2016-06.io.spdk:cnode13105 00:12:16.767 [2024-07-24 17:37:38.151485] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13105: invalid model number 'B_B\6/Ro.S'9vG'(6VE`1vHf&Q;e}H.JD]TbOW<5W' 00:12:16.767 17:37:38 -- target/invalid.sh@58 -- # out='request: 00:12:16.767 { 00:12:16.767 "nqn": "nqn.2016-06.io.spdk:cnode13105", 00:12:16.767 "model_number": "B_B\\6/Ro.S'\''9vG'\''(6VE`1vHf&Q;e}H.JD]TbOW<5W", 00:12:16.767 "method": "nvmf_create_subsystem", 00:12:16.767 "req_id": 1 00:12:16.767 } 00:12:16.767 Got JSON-RPC error response 00:12:16.767 response: 00:12:16.767 { 00:12:16.767 "code": -32602, 00:12:16.767 "message": "Invalid MN B_B\\6/Ro.S'\''9vG'\''(6VE`1vHf&Q;e}H.JD]TbOW<5W" 00:12:16.767 }' 00:12:16.767 17:37:38 -- target/invalid.sh@59 -- # [[ request: 00:12:16.767 { 00:12:16.767 "nqn": "nqn.2016-06.io.spdk:cnode13105", 00:12:16.767 "model_number": "B_B\\6/Ro.S'9vG'(6VE`1vHf&Q;e}H.JD]TbOW<5W", 00:12:16.767 "method": "nvmf_create_subsystem", 00:12:16.767 "req_id": 1 00:12:16.767 } 00:12:16.767 Got JSON-RPC error response 00:12:16.767 response: 00:12:16.767 { 00:12:16.767 "code": -32602, 00:12:16.767 "message": "Invalid MN B_B\\6/Ro.S'9vG'(6VE`1vHf&Q;e}H.JD]TbOW<5W" 00:12:16.767 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:16.767 17:37:38 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:16.767 [2024-07-24 17:37:38.328161] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.767 17:37:38 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:17.025 17:37:38 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:17.025 17:37:38 -- target/invalid.sh@67 -- # echo '' 00:12:17.025 17:37:38 -- target/invalid.sh@67 -- # head -n 1 00:12:17.025 17:37:38 -- target/invalid.sh@67 -- # IP= 00:12:17.025 17:37:38 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:17.285 [2024-07-24 17:37:38.693463] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:17.285 17:37:38 -- target/invalid.sh@69 -- # out='request: 00:12:17.285 { 00:12:17.285 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:17.285 "listen_address": { 00:12:17.285 "trtype": "tcp", 00:12:17.285 "traddr": "", 00:12:17.285 "trsvcid": "4421" 00:12:17.285 }, 00:12:17.285 "method": 
"nvmf_subsystem_remove_listener", 00:12:17.285 "req_id": 1 00:12:17.285 } 00:12:17.285 Got JSON-RPC error response 00:12:17.285 response: 00:12:17.285 { 00:12:17.285 "code": -32602, 00:12:17.285 "message": "Invalid parameters" 00:12:17.285 }' 00:12:17.285 17:37:38 -- target/invalid.sh@70 -- # [[ request: 00:12:17.285 { 00:12:17.285 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:17.285 "listen_address": { 00:12:17.285 "trtype": "tcp", 00:12:17.285 "traddr": "", 00:12:17.285 "trsvcid": "4421" 00:12:17.285 }, 00:12:17.285 "method": "nvmf_subsystem_remove_listener", 00:12:17.285 "req_id": 1 00:12:17.285 } 00:12:17.285 Got JSON-RPC error response 00:12:17.285 response: 00:12:17.285 { 00:12:17.285 "code": -32602, 00:12:17.285 "message": "Invalid parameters" 00:12:17.285 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:17.285 17:37:38 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2185 -i 0 00:12:17.285 [2024-07-24 17:37:38.874061] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2185: invalid cntlid range [0-65519] 00:12:17.545 17:37:38 -- target/invalid.sh@73 -- # out='request: 00:12:17.545 { 00:12:17.545 "nqn": "nqn.2016-06.io.spdk:cnode2185", 00:12:17.545 "min_cntlid": 0, 00:12:17.545 "method": "nvmf_create_subsystem", 00:12:17.545 "req_id": 1 00:12:17.545 } 00:12:17.545 Got JSON-RPC error response 00:12:17.545 response: 00:12:17.545 { 00:12:17.545 "code": -32602, 00:12:17.545 "message": "Invalid cntlid range [0-65519]" 00:12:17.545 }' 00:12:17.545 17:37:38 -- target/invalid.sh@74 -- # [[ request: 00:12:17.545 { 00:12:17.545 "nqn": "nqn.2016-06.io.spdk:cnode2185", 00:12:17.545 "min_cntlid": 0, 00:12:17.545 "method": "nvmf_create_subsystem", 00:12:17.545 "req_id": 1 00:12:17.545 } 00:12:17.545 Got JSON-RPC error response 00:12:17.545 response: 00:12:17.545 { 00:12:17.545 "code": -32602, 00:12:17.545 "message": "Invalid cntlid range [0-65519]" 00:12:17.545 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:17.545 17:37:38 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24166 -i 65520 00:12:17.545 [2024-07-24 17:37:39.062724] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24166: invalid cntlid range [65520-65519] 00:12:17.545 17:37:39 -- target/invalid.sh@75 -- # out='request: 00:12:17.545 { 00:12:17.545 "nqn": "nqn.2016-06.io.spdk:cnode24166", 00:12:17.545 "min_cntlid": 65520, 00:12:17.545 "method": "nvmf_create_subsystem", 00:12:17.545 "req_id": 1 00:12:17.545 } 00:12:17.545 Got JSON-RPC error response 00:12:17.545 response: 00:12:17.545 { 00:12:17.545 "code": -32602, 00:12:17.545 "message": "Invalid cntlid range [65520-65519]" 00:12:17.545 }' 00:12:17.545 17:37:39 -- target/invalid.sh@76 -- # [[ request: 00:12:17.545 { 00:12:17.545 "nqn": "nqn.2016-06.io.spdk:cnode24166", 00:12:17.545 "min_cntlid": 65520, 00:12:17.545 "method": "nvmf_create_subsystem", 00:12:17.545 "req_id": 1 00:12:17.545 } 00:12:17.545 Got JSON-RPC error response 00:12:17.545 response: 00:12:17.545 { 00:12:17.545 "code": -32602, 00:12:17.545 "message": "Invalid cntlid range [65520-65519]" 00:12:17.545 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:17.545 17:37:39 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4174 -I 0 00:12:17.804 [2024-07-24 
17:37:39.251394] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4174: invalid cntlid range [1-0] 00:12:17.804 17:37:39 -- target/invalid.sh@77 -- # out='request: 00:12:17.804 { 00:12:17.804 "nqn": "nqn.2016-06.io.spdk:cnode4174", 00:12:17.804 "max_cntlid": 0, 00:12:17.804 "method": "nvmf_create_subsystem", 00:12:17.804 "req_id": 1 00:12:17.804 } 00:12:17.804 Got JSON-RPC error response 00:12:17.804 response: 00:12:17.804 { 00:12:17.804 "code": -32602, 00:12:17.804 "message": "Invalid cntlid range [1-0]" 00:12:17.804 }' 00:12:17.804 17:37:39 -- target/invalid.sh@78 -- # [[ request: 00:12:17.804 { 00:12:17.804 "nqn": "nqn.2016-06.io.spdk:cnode4174", 00:12:17.804 "max_cntlid": 0, 00:12:17.804 "method": "nvmf_create_subsystem", 00:12:17.804 "req_id": 1 00:12:17.804 } 00:12:17.804 Got JSON-RPC error response 00:12:17.804 response: 00:12:17.804 { 00:12:17.804 "code": -32602, 00:12:17.804 "message": "Invalid cntlid range [1-0]" 00:12:17.804 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:17.804 17:37:39 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8047 -I 65520 00:12:18.063 [2024-07-24 17:37:39.419952] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8047: invalid cntlid range [1-65520] 00:12:18.063 17:37:39 -- target/invalid.sh@79 -- # out='request: 00:12:18.063 { 00:12:18.063 "nqn": "nqn.2016-06.io.spdk:cnode8047", 00:12:18.063 "max_cntlid": 65520, 00:12:18.063 "method": "nvmf_create_subsystem", 00:12:18.063 "req_id": 1 00:12:18.063 } 00:12:18.063 Got JSON-RPC error response 00:12:18.063 response: 00:12:18.063 { 00:12:18.063 "code": -32602, 00:12:18.063 "message": "Invalid cntlid range [1-65520]" 00:12:18.063 }' 00:12:18.063 17:37:39 -- target/invalid.sh@80 -- # [[ request: 00:12:18.063 { 00:12:18.063 "nqn": "nqn.2016-06.io.spdk:cnode8047", 00:12:18.063 "max_cntlid": 65520, 00:12:18.063 "method": "nvmf_create_subsystem", 00:12:18.063 "req_id": 1 00:12:18.063 } 00:12:18.063 Got JSON-RPC error response 00:12:18.063 response: 00:12:18.063 { 00:12:18.063 "code": -32602, 00:12:18.063 "message": "Invalid cntlid range [1-65520]" 00:12:18.063 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.063 17:37:39 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31956 -i 6 -I 5 00:12:18.064 [2024-07-24 17:37:39.600591] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31956: invalid cntlid range [6-5] 00:12:18.064 17:37:39 -- target/invalid.sh@83 -- # out='request: 00:12:18.064 { 00:12:18.064 "nqn": "nqn.2016-06.io.spdk:cnode31956", 00:12:18.064 "min_cntlid": 6, 00:12:18.064 "max_cntlid": 5, 00:12:18.064 "method": "nvmf_create_subsystem", 00:12:18.064 "req_id": 1 00:12:18.064 } 00:12:18.064 Got JSON-RPC error response 00:12:18.064 response: 00:12:18.064 { 00:12:18.064 "code": -32602, 00:12:18.064 "message": "Invalid cntlid range [6-5]" 00:12:18.064 }' 00:12:18.064 17:37:39 -- target/invalid.sh@84 -- # [[ request: 00:12:18.064 { 00:12:18.064 "nqn": "nqn.2016-06.io.spdk:cnode31956", 00:12:18.064 "min_cntlid": 6, 00:12:18.064 "max_cntlid": 5, 00:12:18.064 "method": "nvmf_create_subsystem", 00:12:18.064 "req_id": 1 00:12:18.064 } 00:12:18.064 Got JSON-RPC error response 00:12:18.064 response: 00:12:18.064 { 00:12:18.064 "code": -32602, 00:12:18.064 "message": "Invalid cntlid range [6-5]" 00:12:18.064 
} == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.064 17:37:39 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:18.323 17:37:39 -- target/invalid.sh@87 -- # out='request: 00:12:18.323 { 00:12:18.323 "name": "foobar", 00:12:18.323 "method": "nvmf_delete_target", 00:12:18.323 "req_id": 1 00:12:18.323 } 00:12:18.323 Got JSON-RPC error response 00:12:18.323 response: 00:12:18.323 { 00:12:18.323 "code": -32602, 00:12:18.323 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:18.323 }' 00:12:18.323 17:37:39 -- target/invalid.sh@88 -- # [[ request: 00:12:18.323 { 00:12:18.323 "name": "foobar", 00:12:18.323 "method": "nvmf_delete_target", 00:12:18.323 "req_id": 1 00:12:18.323 } 00:12:18.323 Got JSON-RPC error response 00:12:18.323 response: 00:12:18.323 { 00:12:18.323 "code": -32602, 00:12:18.323 "message": "The specified target doesn't exist, cannot delete it." 00:12:18.323 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:18.323 17:37:39 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:18.323 17:37:39 -- target/invalid.sh@91 -- # nvmftestfini 00:12:18.323 17:37:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:18.323 17:37:39 -- nvmf/common.sh@116 -- # sync 00:12:18.323 17:37:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:18.323 17:37:39 -- nvmf/common.sh@119 -- # set +e 00:12:18.323 17:37:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:18.323 17:37:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:18.323 rmmod nvme_tcp 00:12:18.323 rmmod nvme_fabrics 00:12:18.323 rmmod nvme_keyring 00:12:18.323 17:37:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:18.323 17:37:39 -- nvmf/common.sh@123 -- # set -e 00:12:18.323 17:37:39 -- nvmf/common.sh@124 -- # return 0 00:12:18.323 17:37:39 -- nvmf/common.sh@477 -- # '[' -n 528983 ']' 00:12:18.323 17:37:39 -- nvmf/common.sh@478 -- # killprocess 528983 00:12:18.323 17:37:39 -- common/autotest_common.sh@926 -- # '[' -z 528983 ']' 00:12:18.323 17:37:39 -- common/autotest_common.sh@930 -- # kill -0 528983 00:12:18.323 17:37:39 -- common/autotest_common.sh@931 -- # uname 00:12:18.323 17:37:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:18.323 17:37:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 528983 00:12:18.323 17:37:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:18.323 17:37:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:18.323 17:37:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 528983' 00:12:18.323 killing process with pid 528983 00:12:18.323 17:37:39 -- common/autotest_common.sh@945 -- # kill 528983 00:12:18.323 17:37:39 -- common/autotest_common.sh@950 -- # wait 528983 00:12:18.583 17:37:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:18.583 17:37:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:18.583 17:37:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:18.583 17:37:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.583 17:37:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:18.583 17:37:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.583 17:37:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.583 17:37:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.120 
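For reference, every negative test traced above follows the same shape: build a deliberately malformed argument, hand it to scripts/rpc.py nvmf_create_subsystem, and glob-match the JSON-RPC error text ("Invalid SN", "Invalid MN", "Invalid cntlid range"). A minimal sketch of that pattern, assuming the same rpc.py subcommands and flags seen in this run, with the workspace path shortened and the character-by-character gen_random_s loop condensed to $RANDOM:

  rpc=./scripts/rpc.py                                   # illustrative path, not the full workspace path

  gen_random_s() {                                       # $1 printable-ASCII characters (codes 32-127)
    local length=$1 ll s=
    for ((ll = 0; ll < length; ll++)); do
      s+=$(echo -e "\\x$(printf %x $((RANDOM % 96 + 32)))")
    done
    echo "$s"
  }

  # 21 characters overflow the 20-byte NVMe serial-number field, so the RPC must fail.
  out=$($rpc nvmf_create_subsystem -s "$(gen_random_s 21)" nqn.2016-06.io.spdk:cnode14953 2>&1) || true
  [[ $out == *"Invalid SN"* ]]                           # success criterion of the test

  # cntlid limits outside 1-65519, or a min greater than the max, must be rejected too.
  out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2185 -i 0 2>&1) || true
  [[ $out == *"Invalid cntlid range"* ]]

The 41-character model-number case and the probe that deletes a nonexistent target (via multitarget_rpc.py nvmf_delete_target) follow the same pattern; only the expected error message changes.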
17:37:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:21.120 00:12:21.120 real 0m11.183s 00:12:21.120 user 0m18.895s 00:12:21.120 sys 0m4.672s 00:12:21.120 17:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.120 17:37:42 -- common/autotest_common.sh@10 -- # set +x 00:12:21.120 ************************************ 00:12:21.120 END TEST nvmf_invalid 00:12:21.120 ************************************ 00:12:21.120 17:37:42 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:21.120 17:37:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:21.120 17:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:21.120 17:37:42 -- common/autotest_common.sh@10 -- # set +x 00:12:21.120 ************************************ 00:12:21.120 START TEST nvmf_abort 00:12:21.120 ************************************ 00:12:21.120 17:37:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:21.120 * Looking for test storage... 00:12:21.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.120 17:37:42 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.120 17:37:42 -- nvmf/common.sh@7 -- # uname -s 00:12:21.120 17:37:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.120 17:37:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.120 17:37:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.120 17:37:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.120 17:37:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.120 17:37:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.120 17:37:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.120 17:37:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.120 17:37:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.120 17:37:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.120 17:37:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:21.120 17:37:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:21.120 17:37:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.120 17:37:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.120 17:37:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.120 17:37:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.120 17:37:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.120 17:37:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.120 17:37:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.120 17:37:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.120 17:37:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.120 17:37:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.120 17:37:42 -- paths/export.sh@5 -- # export PATH 00:12:21.120 17:37:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.120 17:37:42 -- nvmf/common.sh@46 -- # : 0 00:12:21.120 17:37:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:21.120 17:37:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:21.120 17:37:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:21.120 17:37:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.121 17:37:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.121 17:37:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:21.121 17:37:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:21.121 17:37:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:21.121 17:37:42 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.121 17:37:42 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:21.121 17:37:42 -- target/abort.sh@14 -- # nvmftestinit 00:12:21.121 17:37:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:21.121 17:37:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.121 17:37:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:21.121 17:37:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:21.121 17:37:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:21.121 17:37:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:21.121 17:37:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.121 17:37:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.121 17:37:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:21.121 17:37:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:21.121 17:37:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:21.121 17:37:42 -- common/autotest_common.sh@10 -- # set +x 00:12:26.399 17:37:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:26.399 17:37:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:26.399 17:37:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:26.399 17:37:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:26.399 17:37:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:26.399 17:37:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:26.399 17:37:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:26.399 17:37:47 -- nvmf/common.sh@294 -- # net_devs=() 00:12:26.399 17:37:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:26.399 17:37:47 -- nvmf/common.sh@295 -- # e810=() 00:12:26.399 17:37:47 -- nvmf/common.sh@295 -- # local -ga e810 00:12:26.399 17:37:47 -- nvmf/common.sh@296 -- # x722=() 00:12:26.399 17:37:47 -- nvmf/common.sh@296 -- # local -ga x722 00:12:26.399 17:37:47 -- nvmf/common.sh@297 -- # mlx=() 00:12:26.399 17:37:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:26.399 17:37:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.399 17:37:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:26.399 17:37:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:26.399 17:37:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:26.399 17:37:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:26.399 17:37:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:26.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:26.399 17:37:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:26.399 17:37:47 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:26.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:26.399 17:37:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:26.399 17:37:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:26.399 17:37:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.399 17:37:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:26.399 17:37:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.399 17:37:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:26.399 Found net devices under 0000:86:00.0: cvl_0_0 00:12:26.399 17:37:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.399 17:37:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:26.399 17:37:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.399 17:37:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:26.399 17:37:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.399 17:37:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:26.399 Found net devices under 0000:86:00.1: cvl_0_1 00:12:26.399 17:37:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.399 17:37:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:26.399 17:37:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:26.399 17:37:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:26.399 17:37:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:26.399 17:37:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.399 17:37:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.399 17:37:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.399 17:37:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:26.399 17:37:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.399 17:37:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.399 17:37:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:26.399 17:37:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.399 17:37:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.399 17:37:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:26.399 17:37:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:26.399 17:37:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.400 17:37:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.400 17:37:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.400 17:37:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.400 17:37:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:26.400 17:37:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
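Behind the xtrace noise, nvmf_tcp_init above is splitting the two E810 ports into a point-to-point test bed: cvl_0_0 is moved into a private network namespace to play the target, while cvl_0_1 stays in the root namespace as the initiator. Condensed, with the interface names and addresses taken from this run (they will differ on other machines):

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                           # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up

The lines that follow bring up loopback inside the namespace, open TCP port 4420 on the initiator interface, and ping in both directions before any NVMe/TCP traffic is attempted.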
00:12:26.400 17:37:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.400 17:37:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.400 17:37:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:26.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:12:26.400 00:12:26.400 --- 10.0.0.2 ping statistics --- 00:12:26.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.400 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:26.400 17:37:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:12:26.400 00:12:26.400 --- 10.0.0.1 ping statistics --- 00:12:26.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.400 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:26.400 17:37:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.400 17:37:47 -- nvmf/common.sh@410 -- # return 0 00:12:26.400 17:37:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:26.400 17:37:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.400 17:37:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:26.400 17:37:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:26.400 17:37:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.400 17:37:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:26.400 17:37:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:26.400 17:37:47 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:26.400 17:37:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:26.400 17:37:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:26.400 17:37:47 -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 17:37:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:26.400 17:37:47 -- nvmf/common.sh@469 -- # nvmfpid=533394 00:12:26.400 17:37:47 -- nvmf/common.sh@470 -- # waitforlisten 533394 00:12:26.400 17:37:47 -- common/autotest_common.sh@819 -- # '[' -z 533394 ']' 00:12:26.400 17:37:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.400 17:37:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:26.400 17:37:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.400 17:37:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:26.400 17:37:47 -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 [2024-07-24 17:37:47.982124] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
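With connectivity confirmed, abort.sh starts the target application inside that namespace on cores 1-3 (-m 0xE) with a wide tracepoint mask (-e 0xFFFF) and waits for its RPC socket. A rough equivalent, with the binary path shortened and a plain socket poll standing in for the suite's waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # default RPC socket, per the log above

The loop only checks that the socket exists; it is an approximation of what waitforlisten does, not the helper itself.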
00:12:26.400 [2024-07-24 17:37:47.982167] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.659 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.659 [2024-07-24 17:37:48.040372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:26.659 [2024-07-24 17:37:48.110964] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:26.659 [2024-07-24 17:37:48.111086] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.659 [2024-07-24 17:37:48.111094] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.659 [2024-07-24 17:37:48.111100] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.659 [2024-07-24 17:37:48.111203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.659 [2024-07-24 17:37:48.111288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.659 [2024-07-24 17:37:48.111289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.229 17:37:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:27.229 17:37:48 -- common/autotest_common.sh@852 -- # return 0 00:12:27.229 17:37:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:27.229 17:37:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:27.229 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:12:27.229 17:37:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.229 17:37:48 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:27.229 17:37:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.229 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:12:27.229 [2024-07-24 17:37:48.824352] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.489 17:37:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.489 17:37:48 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:27.489 17:37:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.489 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:12:27.489 Malloc0 00:12:27.489 17:37:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.489 17:37:48 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:27.489 17:37:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.489 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:12:27.489 Delay0 00:12:27.489 17:37:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.489 17:37:48 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:27.489 17:37:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.489 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:12:27.489 17:37:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.489 17:37:48 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:27.489 17:37:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.489 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:12:27.489 17:37:48 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:12:27.489 17:37:48 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:27.489 17:37:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.489 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:12:27.489 [2024-07-24 17:37:48.899241] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.489 17:37:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.489 17:37:48 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:27.489 17:37:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.489 17:37:48 -- common/autotest_common.sh@10 -- # set +x 00:12:27.489 17:37:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.489 17:37:48 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:27.489 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.489 [2024-07-24 17:37:48.970254] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:30.060 Initializing NVMe Controllers 00:12:30.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:30.060 controller IO queue size 128 less than required 00:12:30.060 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:30.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:30.060 Initialization complete. Launching workers. 00:12:30.060 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42209 00:12:30.060 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42270, failed to submit 62 00:12:30.060 success 42209, unsuccess 61, failed 0 00:12:30.060 17:37:51 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:30.060 17:37:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.060 17:37:51 -- common/autotest_common.sh@10 -- # set +x 00:12:30.060 17:37:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.060 17:37:51 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:30.060 17:37:51 -- target/abort.sh@38 -- # nvmftestfini 00:12:30.060 17:37:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:30.060 17:37:51 -- nvmf/common.sh@116 -- # sync 00:12:30.060 17:37:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:30.061 17:37:51 -- nvmf/common.sh@119 -- # set +e 00:12:30.061 17:37:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:30.061 17:37:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:30.061 rmmod nvme_tcp 00:12:30.061 rmmod nvme_fabrics 00:12:30.061 rmmod nvme_keyring 00:12:30.061 17:37:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:30.061 17:37:51 -- nvmf/common.sh@123 -- # set -e 00:12:30.061 17:37:51 -- nvmf/common.sh@124 -- # return 0 00:12:30.061 17:37:51 -- nvmf/common.sh@477 -- # '[' -n 533394 ']' 00:12:30.061 17:37:51 -- nvmf/common.sh@478 -- # killprocess 533394 00:12:30.061 17:37:51 -- common/autotest_common.sh@926 -- # '[' -z 533394 ']' 00:12:30.061 17:37:51 -- common/autotest_common.sh@930 -- # kill -0 533394 00:12:30.061 17:37:51 -- common/autotest_common.sh@931 -- # uname 00:12:30.061 17:37:51 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:30.061 17:37:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 533394 00:12:30.061 17:37:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:30.061 17:37:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:30.061 17:37:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 533394' 00:12:30.061 killing process with pid 533394 00:12:30.061 17:37:51 -- common/autotest_common.sh@945 -- # kill 533394 00:12:30.061 17:37:51 -- common/autotest_common.sh@950 -- # wait 533394 00:12:30.061 17:37:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:30.061 17:37:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:30.061 17:37:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:30.061 17:37:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.061 17:37:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:30.061 17:37:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.061 17:37:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.061 17:37:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.976 17:37:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:31.976 00:12:31.976 real 0m11.340s 00:12:31.976 user 0m13.192s 00:12:31.976 sys 0m5.170s 00:12:31.976 17:37:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.976 17:37:53 -- common/autotest_common.sh@10 -- # set +x 00:12:31.976 ************************************ 00:12:31.976 END TEST nvmf_abort 00:12:31.976 ************************************ 00:12:31.976 17:37:53 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:31.976 17:37:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:31.976 17:37:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:31.976 17:37:53 -- common/autotest_common.sh@10 -- # set +x 00:12:31.976 ************************************ 00:12:31.976 START TEST nvmf_ns_hotplug_stress 00:12:31.976 ************************************ 00:12:31.976 17:37:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:32.238 * Looking for test storage... 
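For the record, the abort test body that just finished provisions a single artificially slow namespace and then hammers it with the abort example. Replayed with the same RPCs and addresses as this run (rpc_cmd is the suite's RPC helper in front of scripts/rpc.py; paths shortened):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0                 # 64 MB malloc bdev, 4096-byte blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Delay0 layers artificial latency on Malloc0, and queue depth 128 keeps I/O in flight to abort.
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0      # teardown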
00:12:32.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.238 17:37:53 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.238 17:37:53 -- nvmf/common.sh@7 -- # uname -s 00:12:32.238 17:37:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.238 17:37:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.238 17:37:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.238 17:37:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.238 17:37:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.238 17:37:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.238 17:37:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.238 17:37:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.238 17:37:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.238 17:37:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.238 17:37:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:32.238 17:37:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:32.238 17:37:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.238 17:37:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.238 17:37:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.238 17:37:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.238 17:37:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.238 17:37:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.238 17:37:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.238 17:37:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.238 17:37:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.238 17:37:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.238 17:37:53 -- paths/export.sh@5 -- # export PATH 00:12:32.238 17:37:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.238 17:37:53 -- nvmf/common.sh@46 -- # : 0 00:12:32.238 17:37:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:32.238 17:37:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:32.238 17:37:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:32.238 17:37:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.238 17:37:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.239 17:37:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:32.239 17:37:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:32.239 17:37:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:32.239 17:37:53 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:32.239 17:37:53 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:32.239 17:37:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:32.239 17:37:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.239 17:37:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:32.239 17:37:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:32.239 17:37:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:32.239 17:37:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.239 17:37:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.239 17:37:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.239 17:37:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:32.239 17:37:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:32.239 17:37:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:32.239 17:37:53 -- common/autotest_common.sh@10 -- # set +x 00:12:37.517 17:37:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:37.517 17:37:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:37.517 17:37:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:37.517 17:37:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:37.517 17:37:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:37.517 17:37:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:37.517 17:37:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:37.517 17:37:58 -- nvmf/common.sh@294 -- # net_devs=() 00:12:37.517 17:37:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:37.517 17:37:58 -- nvmf/common.sh@295 
-- # e810=() 00:12:37.517 17:37:58 -- nvmf/common.sh@295 -- # local -ga e810 00:12:37.517 17:37:58 -- nvmf/common.sh@296 -- # x722=() 00:12:37.517 17:37:58 -- nvmf/common.sh@296 -- # local -ga x722 00:12:37.517 17:37:58 -- nvmf/common.sh@297 -- # mlx=() 00:12:37.517 17:37:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:37.517 17:37:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.517 17:37:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.517 17:37:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.517 17:37:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.518 17:37:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.518 17:37:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.518 17:37:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.518 17:37:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.518 17:37:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.518 17:37:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.518 17:37:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.518 17:37:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:37.518 17:37:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:37.518 17:37:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:37.518 17:37:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:37.518 17:37:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:37.518 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:37.518 17:37:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:37.518 17:37:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:37.518 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:37.518 17:37:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:37.518 17:37:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:37.518 17:37:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.518 17:37:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:37.518 17:37:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.518 17:37:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:37.518 Found 
net devices under 0000:86:00.0: cvl_0_0 00:12:37.518 17:37:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.518 17:37:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:37.518 17:37:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.518 17:37:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:37.518 17:37:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.518 17:37:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:37.518 Found net devices under 0000:86:00.1: cvl_0_1 00:12:37.518 17:37:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.518 17:37:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:37.518 17:37:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:37.518 17:37:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:37.518 17:37:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:37.518 17:37:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.518 17:37:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.518 17:37:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.518 17:37:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:37.518 17:37:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.518 17:37:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.518 17:37:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:37.518 17:37:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.518 17:37:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.518 17:37:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:37.518 17:37:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:37.518 17:37:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.518 17:37:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.518 17:37:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.518 17:37:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.518 17:37:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:37.518 17:37:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.518 17:37:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.518 17:37:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.518 17:37:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:37.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:12:37.518 00:12:37.518 --- 10.0.0.2 ping statistics --- 00:12:37.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.518 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:12:37.518 17:37:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:12:37.518 00:12:37.518 --- 10.0.0.1 ping statistics --- 00:12:37.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.518 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:12:37.518 17:37:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.519 17:37:58 -- nvmf/common.sh@410 -- # return 0 00:12:37.519 17:37:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:37.519 17:37:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.519 17:37:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:37.519 17:37:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:37.519 17:37:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.519 17:37:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:37.519 17:37:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:37.519 17:37:58 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:37.519 17:37:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:37.519 17:37:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:37.519 17:37:58 -- common/autotest_common.sh@10 -- # set +x 00:12:37.519 17:37:58 -- nvmf/common.sh@469 -- # nvmfpid=537360 00:12:37.519 17:37:58 -- nvmf/common.sh@470 -- # waitforlisten 537360 00:12:37.519 17:37:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:37.519 17:37:58 -- common/autotest_common.sh@819 -- # '[' -z 537360 ']' 00:12:37.519 17:37:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.519 17:37:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:37.519 17:37:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.519 17:37:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:37.519 17:37:58 -- common/autotest_common.sh@10 -- # set +x 00:12:37.519 [2024-07-24 17:37:58.747329] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:37.519 [2024-07-24 17:37:58.747373] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.519 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.519 [2024-07-24 17:37:58.802546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:37.519 [2024-07-24 17:37:58.872748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:37.519 [2024-07-24 17:37:58.872863] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.519 [2024-07-24 17:37:58.872870] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.519 [2024-07-24 17:37:58.872877] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:37.519 [2024-07-24 17:37:58.872975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.519 [2024-07-24 17:37:58.873069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.519 [2024-07-24 17:37:58.873072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.089 17:37:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:38.089 17:37:59 -- common/autotest_common.sh@852 -- # return 0 00:12:38.089 17:37:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:38.089 17:37:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:38.089 17:37:59 -- common/autotest_common.sh@10 -- # set +x 00:12:38.089 17:37:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.089 17:37:59 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:38.089 17:37:59 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:38.347 [2024-07-24 17:37:59.750325] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.347 17:37:59 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:38.606 17:37:59 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.606 [2024-07-24 17:38:00.119715] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.606 17:38:00 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.867 17:38:00 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:39.158 Malloc0 00:12:39.158 17:38:00 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:39.158 Delay0 00:12:39.158 17:38:00 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.418 17:38:00 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:39.418 NULL1 00:12:39.677 17:38:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:39.677 17:38:01 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=537706 00:12:39.677 17:38:01 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:39.677 17:38:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:39.677 17:38:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.677 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.935 17:38:01 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.195 17:38:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:40.195 17:38:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:40.195 true 00:12:40.195 17:38:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:40.195 17:38:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.455 17:38:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.714 17:38:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:40.714 17:38:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:40.714 true 00:12:40.714 17:38:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:40.714 17:38:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.974 17:38:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.234 17:38:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:41.234 17:38:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:41.234 true 00:12:41.234 17:38:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:41.234 17:38:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.493 17:38:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.752 17:38:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:41.752 17:38:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:41.752 true 00:12:41.752 17:38:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:41.752 17:38:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.011 17:38:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.270 17:38:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:42.270 17:38:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:42.270 true 00:12:42.270 17:38:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:42.270 17:38:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.530 17:38:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:12:42.530 17:38:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:42.530 17:38:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:42.788 true 00:12:42.788 17:38:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:42.788 17:38:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.047 17:38:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.306 17:38:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:43.306 17:38:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:43.306 true 00:12:43.306 17:38:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:43.306 17:38:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.566 17:38:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.826 17:38:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:43.826 17:38:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:43.826 true 00:12:43.826 17:38:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:43.826 17:38:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.085 17:38:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.344 17:38:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:44.344 17:38:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:44.344 true 00:12:44.344 17:38:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:44.344 17:38:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.604 17:38:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.863 17:38:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:44.863 17:38:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:44.863 true 00:12:44.863 17:38:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:44.863 17:38:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.122 17:38:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.381 17:38:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:45.381 17:38:06 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:45.381 true 00:12:45.381 17:38:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:45.381 17:38:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.640 17:38:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.899 17:38:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:45.899 17:38:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:46.156 true 00:12:46.156 17:38:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:46.156 17:38:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.156 17:38:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.413 17:38:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:46.413 17:38:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:46.670 true 00:12:46.670 17:38:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:46.670 17:38:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.670 17:38:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.928 17:38:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:46.928 17:38:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:47.187 true 00:12:47.187 17:38:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:47.187 17:38:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.445 17:38:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.445 17:38:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:47.445 17:38:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:47.703 true 00:12:47.703 17:38:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:47.703 17:38:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.961 17:38:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.961 17:38:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:47.961 17:38:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1016 00:12:48.219 true 00:12:48.220 17:38:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:48.220 17:38:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.479 17:38:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.479 17:38:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:48.479 17:38:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:48.774 true 00:12:48.774 17:38:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:48.774 17:38:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.034 17:38:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.034 17:38:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:49.034 17:38:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:49.292 true 00:12:49.292 17:38:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:49.292 17:38:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.551 17:38:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.551 17:38:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:49.551 17:38:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:49.811 true 00:12:49.811 17:38:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:49.811 17:38:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.071 17:38:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.331 17:38:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:50.331 17:38:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:50.331 true 00:12:50.331 17:38:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:50.331 17:38:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.590 Read completed with error (sct=0, sc=11) 00:12:50.590 17:38:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.590 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:12:50.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.849 17:38:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:50.849 17:38:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:50.849 true 00:12:50.849 17:38:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:50.849 17:38:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.787 17:38:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.046 17:38:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:52.046 17:38:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:52.046 true 00:12:52.046 17:38:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:52.046 17:38:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.304 17:38:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.563 17:38:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:52.563 17:38:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:52.563 true 00:12:52.563 17:38:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:52.563 17:38:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.940 17:38:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.940 17:38:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:53.940 17:38:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:54.199 true 00:12:54.199 17:38:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:54.199 17:38:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.137 17:38:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.137 17:38:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:55.137 17:38:16 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:55.397 true 00:12:55.397 17:38:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:55.397 17:38:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.657 17:38:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.657 17:38:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:55.657 17:38:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:55.915 true 00:12:55.915 17:38:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:55.915 17:38:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.289 17:38:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.289 17:38:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:57.289 17:38:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:57.547 true 00:12:57.547 17:38:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:57.547 17:38:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.485 17:38:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.485 17:38:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:58.485 17:38:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:58.485 true 00:12:58.744 17:38:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:58.744 17:38:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.744 17:38:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.003 17:38:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:59.003 17:38:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:59.003 true 00:12:59.262 17:38:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:12:59.262 17:38:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.200 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.200 17:38:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.458 17:38:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:00.458 17:38:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:00.715 true 00:13:00.715 17:38:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:00.715 17:38:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.651 17:38:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.652 17:38:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:13:01.652 17:38:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:13:01.914 true 00:13:01.914 17:38:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:01.914 17:38:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.914 17:38:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.174 17:38:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:13:02.174 17:38:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:13:02.434 true 00:13:02.434 17:38:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:02.434 17:38:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.693 17:38:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.693 17:38:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:13:02.693 17:38:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:13:02.953 true 00:13:02.953 17:38:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:02.953 17:38:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.212 17:38:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.212 17:38:24 -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1034 00:13:03.212 17:38:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:13:03.470 true 00:13:03.470 17:38:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:03.470 17:38:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.845 17:38:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.845 17:38:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:13:04.845 17:38:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:13:04.845 true 00:13:04.845 17:38:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:04.845 17:38:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.781 17:38:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.041 17:38:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:13:06.041 17:38:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:13:06.041 true 00:13:06.041 17:38:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:06.041 17:38:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.300 17:38:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.559 17:38:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:13:06.559 17:38:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:13:06.559 true 00:13:06.830 17:38:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:06.830 17:38:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.788 17:38:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.047 17:38:29 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:13:08.047 17:38:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:13:08.305 true 00:13:08.305 17:38:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:08.305 17:38:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.241 17:38:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.241 17:38:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:13:09.241 17:38:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:13:09.501 true 00:13:09.501 17:38:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:09.501 17:38:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.760 17:38:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.760 17:38:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:13:09.760 17:38:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:13:10.020 true 00:13:10.020 17:38:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:10.020 17:38:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.279 Initializing NVMe Controllers 00:13:10.279 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:10.279 Controller IO queue size 128, less than required. 00:13:10.279 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:10.279 Controller IO queue size 128, less than required. 00:13:10.280 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:10.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:10.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:10.280 Initialization complete. Launching workers. 
00:13:10.280 ======================================================== 00:13:10.280 Latency(us) 00:13:10.280 Device Information : IOPS MiB/s Average min max 00:13:10.280 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1182.05 0.58 45414.55 1836.41 1083160.27 00:13:10.280 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11669.41 5.70 10940.15 2079.84 376926.27 00:13:10.280 ======================================================== 00:13:10.280 Total : 12851.46 6.28 14111.03 1836.41 1083160.27 00:13:10.280 00:13:10.280 17:38:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.280 17:38:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:13:10.280 17:38:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:13:10.539 true 00:13:10.539 17:38:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 537706 00:13:10.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (537706) - No such process 00:13:10.539 17:38:31 -- target/ns_hotplug_stress.sh@53 -- # wait 537706 00:13:10.539 17:38:31 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.798 17:38:32 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.798 17:38:32 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:10.798 17:38:32 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:10.798 17:38:32 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:10.798 17:38:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:10.798 17:38:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:11.057 null0 00:13:11.057 17:38:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.057 17:38:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.057 17:38:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:11.316 null1 00:13:11.316 17:38:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.316 17:38:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.316 17:38:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:11.316 null2 00:13:11.316 17:38:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.316 17:38:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.316 17:38:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:11.575 null3 00:13:11.575 17:38:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.575 17:38:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.575 17:38:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:11.835 null4 00:13:11.835 17:38:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.835 17:38:33 -- target/ns_hotplug_stress.sh@59 -- 
# (( i < nthreads )) 00:13:11.835 17:38:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:11.835 null5 00:13:11.835 17:38:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.835 17:38:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.835 17:38:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:12.095 null6 00:13:12.095 17:38:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.095 17:38:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.095 17:38:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:12.354 null7 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.354 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
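Each backgrounded worker whose PID lands in pids above runs the add_remove helper: ten rounds of attaching its bdev at a fixed NSID and detaching it again, with the parent collecting the PIDs and waiting on all eight (the wait on PIDs 543407 543408 ... appears just below). Condensed from the xtrace, reusing $rpc and nthreads from the sketch above:

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # NSID 1..8 paired with null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"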
00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@66 -- # wait 543407 543408 543410 543412 543414 543416 543417 543419 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.355 17:38:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.614 17:38:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.614 17:38:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.614 17:38:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.614 17:38:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.614 17:38:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.614 17:38:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.614 
17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.614 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:12.873 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.873 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.873 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.873 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.873 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.873 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.873 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.873 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.132 17:38:34 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.392 17:38:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.651 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.911 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.170 17:38:35 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.170 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
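rpc.py is only a thin client: each add/remove in these rounds becomes a JSON-RPC 2.0 request on the target's UNIX socket (/var/tmp/spdk.sock, the default also seen in the waitforlisten trace further down). As orientation only (field names as documented for SPDK's nvmf_subsystem_add_ns, not captured from this run), the add side of one round corresponds to a request of roughly this shape:

    cat <<'EOF'
    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "nvmf_subsystem_add_ns",
      "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": { "bdev_name": "null0", "nsid": 1 }
      }
    }
    EOF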
00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.428 17:38:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.428 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.428 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.428 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.688 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.688 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.688 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.688 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.688 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.688 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.688 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.688 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.946 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.205 17:38:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:15.466 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.466 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.466 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.466 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.466 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.466 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.466 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.466 17:38:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.466 17:38:37 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.466 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.726 17:38:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:15.986 17:38:37 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:15.986 17:38:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:15.986 17:38:37 -- nvmf/common.sh@116 -- # sync 00:13:15.986 17:38:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:15.986 17:38:37 -- nvmf/common.sh@119 -- # set +e 00:13:15.986 17:38:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:15.986 17:38:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:15.986 rmmod nvme_tcp 00:13:15.986 rmmod nvme_fabrics 00:13:15.986 rmmod nvme_keyring 00:13:15.986 17:38:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:15.986 17:38:37 -- nvmf/common.sh@123 -- # set -e 00:13:15.986 17:38:37 -- nvmf/common.sh@124 -- # return 0 00:13:15.986 17:38:37 -- nvmf/common.sh@477 -- # '[' -n 537360 ']' 00:13:15.986 17:38:37 -- nvmf/common.sh@478 -- # killprocess 537360 00:13:15.986 17:38:37 -- common/autotest_common.sh@926 -- # '[' -z 537360 ']' 00:13:15.986 17:38:37 -- common/autotest_common.sh@930 -- # kill -0 537360 00:13:15.986 17:38:37 -- common/autotest_common.sh@931 -- # uname 00:13:15.986 17:38:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:15.986 17:38:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 537360 00:13:15.986 17:38:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:15.986 17:38:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:15.986 17:38:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 537360' 00:13:15.986 killing process with pid 537360 00:13:15.986 17:38:37 -- common/autotest_common.sh@945 -- # kill 537360 00:13:15.986 17:38:37 -- common/autotest_common.sh@950 -- # wait 537360 00:13:16.245 17:38:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:16.245 17:38:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:16.245 17:38:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 
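The nvmftestfini path that produced the rmmod messages above is the standard wind-down for these TCP runs: flush, unload the host-side NVMe/TCP modules, then stop the nvmf_tgt process. Condensed (the pid is written as a variable here; this run's value was 537360):

    sync
    # nvme-tcp holds a reference on nvme-fabrics, so it is removed first.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the target application started for the test and reap it.
    kill "$nvmfpid"
    wait "$nvmfpid"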
00:13:16.245 17:38:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.245 17:38:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:16.245 17:38:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.245 17:38:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.245 17:38:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.783 17:38:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:18.783 00:13:18.783 real 0m46.265s 00:13:18.783 user 3m12.493s 00:13:18.783 sys 0m14.999s 00:13:18.783 17:38:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.783 17:38:39 -- common/autotest_common.sh@10 -- # set +x 00:13:18.783 ************************************ 00:13:18.783 END TEST nvmf_ns_hotplug_stress 00:13:18.783 ************************************ 00:13:18.783 17:38:39 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:18.783 17:38:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:18.783 17:38:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:18.783 17:38:39 -- common/autotest_common.sh@10 -- # set +x 00:13:18.783 ************************************ 00:13:18.783 START TEST nvmf_connect_stress 00:13:18.783 ************************************ 00:13:18.783 17:38:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:18.783 * Looking for test storage... 00:13:18.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.783 17:38:39 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.783 17:38:39 -- nvmf/common.sh@7 -- # uname -s 00:13:18.783 17:38:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.783 17:38:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.783 17:38:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.783 17:38:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.783 17:38:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.783 17:38:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.783 17:38:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.783 17:38:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.783 17:38:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.783 17:38:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.783 17:38:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:18.783 17:38:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:18.783 17:38:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.783 17:38:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.783 17:38:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.783 17:38:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.783 17:38:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.783 17:38:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.783 17:38:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.783 17:38:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.783 17:38:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.783 17:38:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.783 17:38:39 -- paths/export.sh@5 -- # export PATH 00:13:18.783 17:38:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.783 17:38:39 -- nvmf/common.sh@46 -- # : 0 00:13:18.783 17:38:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:18.783 17:38:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:18.783 17:38:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:18.783 17:38:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.783 17:38:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.783 17:38:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:18.783 17:38:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:18.783 17:38:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:18.783 17:38:39 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:18.783 17:38:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:18.783 17:38:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.783 17:38:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:18.783 17:38:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:18.783 17:38:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:18.783 17:38:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.783 17:38:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.783 17:38:39 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.783 17:38:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:18.783 17:38:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:18.783 17:38:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:18.783 17:38:39 -- common/autotest_common.sh@10 -- # set +x 00:13:24.139 17:38:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:24.139 17:38:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:24.139 17:38:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:24.139 17:38:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:24.139 17:38:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:24.139 17:38:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:24.139 17:38:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:24.139 17:38:45 -- nvmf/common.sh@294 -- # net_devs=() 00:13:24.139 17:38:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:24.139 17:38:45 -- nvmf/common.sh@295 -- # e810=() 00:13:24.139 17:38:45 -- nvmf/common.sh@295 -- # local -ga e810 00:13:24.139 17:38:45 -- nvmf/common.sh@296 -- # x722=() 00:13:24.139 17:38:45 -- nvmf/common.sh@296 -- # local -ga x722 00:13:24.139 17:38:45 -- nvmf/common.sh@297 -- # mlx=() 00:13:24.139 17:38:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:24.139 17:38:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.139 17:38:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:24.139 17:38:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:24.139 17:38:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:24.139 17:38:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:24.139 17:38:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:24.139 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:24.139 17:38:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:24.139 17:38:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:24.139 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:24.139 
17:38:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:24.139 17:38:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:24.139 17:38:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.139 17:38:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:24.139 17:38:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.139 17:38:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:24.139 Found net devices under 0000:86:00.0: cvl_0_0 00:13:24.139 17:38:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.139 17:38:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:24.139 17:38:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.139 17:38:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:24.139 17:38:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.139 17:38:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:24.139 Found net devices under 0000:86:00.1: cvl_0_1 00:13:24.139 17:38:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.139 17:38:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:24.139 17:38:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:24.139 17:38:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:24.139 17:38:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:24.139 17:38:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.139 17:38:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.139 17:38:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.139 17:38:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:24.139 17:38:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.139 17:38:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.140 17:38:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:24.140 17:38:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.140 17:38:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.140 17:38:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:24.140 17:38:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:24.140 17:38:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.140 17:38:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.140 17:38:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.140 17:38:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.140 17:38:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:24.140 17:38:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.140 17:38:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.140 17:38:45 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.140 17:38:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:24.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:13:24.140 00:13:24.140 --- 10.0.0.2 ping statistics --- 00:13:24.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.140 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:13:24.140 17:38:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:24.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:13:24.140 00:13:24.140 --- 10.0.0.1 ping statistics --- 00:13:24.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.140 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:13:24.140 17:38:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.140 17:38:45 -- nvmf/common.sh@410 -- # return 0 00:13:24.140 17:38:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:24.140 17:38:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.140 17:38:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:24.140 17:38:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:24.140 17:38:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.140 17:38:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:24.140 17:38:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:24.140 17:38:45 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:24.140 17:38:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:24.140 17:38:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:24.140 17:38:45 -- common/autotest_common.sh@10 -- # set +x 00:13:24.140 17:38:45 -- nvmf/common.sh@469 -- # nvmfpid=547625 00:13:24.140 17:38:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:24.140 17:38:45 -- nvmf/common.sh@470 -- # waitforlisten 547625 00:13:24.140 17:38:45 -- common/autotest_common.sh@819 -- # '[' -z 547625 ']' 00:13:24.140 17:38:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.140 17:38:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:24.140 17:38:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.140 17:38:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:24.140 17:38:45 -- common/autotest_common.sh@10 -- # set +x 00:13:24.140 [2024-07-24 17:38:45.693943] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
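Condensed from the nvmf_tcp_init trace above, the target-side plumbing for these TCP runs amounts to moving one E810 port into a private network namespace, addressing both ends out of 10.0.0.0/24, and opening TCP port 4420. A hand-run sketch using the interface names from this host (cvl_0_0 becomes the in-namespace target port, cvl_0_1 stays in the root namespace as the initiator port) would look like:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target, as in the trace above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Apart from the host-specific cvl_* names, every command here is taken verbatim from the commands logged above.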
00:13:24.140 [2024-07-24 17:38:45.693989] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.140 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.400 [2024-07-24 17:38:45.751987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:24.400 [2024-07-24 17:38:45.824321] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:24.400 [2024-07-24 17:38:45.824453] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.400 [2024-07-24 17:38:45.824461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.400 [2024-07-24 17:38:45.824468] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.400 [2024-07-24 17:38:45.824589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.400 [2024-07-24 17:38:45.824658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.400 [2024-07-24 17:38:45.824659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.967 17:38:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:24.967 17:38:46 -- common/autotest_common.sh@852 -- # return 0 00:13:24.967 17:38:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:24.967 17:38:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:24.967 17:38:46 -- common/autotest_common.sh@10 -- # set +x 00:13:24.968 17:38:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.968 17:38:46 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.968 17:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.968 17:38:46 -- common/autotest_common.sh@10 -- # set +x 00:13:24.968 [2024-07-24 17:38:46.545335] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.968 17:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.968 17:38:46 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:24.968 17:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.968 17:38:46 -- common/autotest_common.sh@10 -- # set +x 00:13:24.968 17:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.968 17:38:46 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.968 17:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.968 17:38:46 -- common/autotest_common.sh@10 -- # set +x 00:13:25.228 [2024-07-24 17:38:46.577175] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.228 17:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.228 17:38:46 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:25.228 17:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.228 17:38:46 -- common/autotest_common.sh@10 -- # set +x 00:13:25.228 NULL1 00:13:25.228 17:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.228 17:38:46 -- target/connect_stress.sh@21 -- # PERF_PID=547834 00:13:25.228 17:38:46 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:25.228 17:38:46 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:25.228 17:38:46 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.228 17:38:46 -- target/connect_stress.sh@28 -- # cat 00:13:25.228 17:38:46 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:25.228 17:38:46 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:13:25.228 17:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.229 17:38:46 -- common/autotest_common.sh@10 -- # set +x 00:13:25.486 17:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.486 17:38:47 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:25.486 17:38:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.486 17:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.486 17:38:47 -- common/autotest_common.sh@10 -- # set +x 00:13:25.745 17:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.745 17:38:47 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:25.745 17:38:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.745 17:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.745 17:38:47 -- common/autotest_common.sh@10 -- # set +x 00:13:26.312 17:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.312 17:38:47 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:26.312 17:38:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.312 17:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.312 17:38:47 -- common/autotest_common.sh@10 -- # set +x 00:13:26.571 17:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.571 17:38:47 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:26.571 17:38:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.571 17:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.571 17:38:47 -- common/autotest_common.sh@10 -- # set +x 00:13:26.830 17:38:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.830 17:38:48 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:26.830 17:38:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.830 17:38:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.830 17:38:48 -- common/autotest_common.sh@10 -- # set +x 00:13:27.089 17:38:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.089 17:38:48 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:27.089 17:38:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.089 17:38:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.089 17:38:48 -- common/autotest_common.sh@10 -- # set +x 00:13:27.394 17:38:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.394 17:38:48 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:27.394 17:38:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.394 17:38:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.395 17:38:48 -- common/autotest_common.sh@10 -- # set +x 00:13:27.962 17:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.962 17:38:49 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:27.962 17:38:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.962 17:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.962 17:38:49 -- common/autotest_common.sh@10 -- # set +x 00:13:28.222 17:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.222 17:38:49 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:28.222 17:38:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.222 17:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.222 17:38:49 -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 17:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.481 17:38:49 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:28.481 17:38:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.481 
17:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.481 17:38:49 -- common/autotest_common.sh@10 -- # set +x 00:13:28.741 17:38:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.741 17:38:50 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:28.741 17:38:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.741 17:38:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.741 17:38:50 -- common/autotest_common.sh@10 -- # set +x 00:13:29.001 17:38:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.001 17:38:50 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:29.001 17:38:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.001 17:38:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.001 17:38:50 -- common/autotest_common.sh@10 -- # set +x 00:13:29.569 17:38:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.569 17:38:50 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:29.569 17:38:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.569 17:38:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.569 17:38:50 -- common/autotest_common.sh@10 -- # set +x 00:13:29.828 17:38:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.828 17:38:51 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:29.828 17:38:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.828 17:38:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.828 17:38:51 -- common/autotest_common.sh@10 -- # set +x 00:13:30.087 17:38:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.087 17:38:51 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:30.087 17:38:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.087 17:38:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.087 17:38:51 -- common/autotest_common.sh@10 -- # set +x 00:13:30.346 17:38:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.346 17:38:51 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:30.346 17:38:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.346 17:38:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.346 17:38:51 -- common/autotest_common.sh@10 -- # set +x 00:13:30.605 17:38:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.605 17:38:52 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:30.605 17:38:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.605 17:38:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.605 17:38:52 -- common/autotest_common.sh@10 -- # set +x 00:13:31.173 17:38:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.173 17:38:52 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:31.173 17:38:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.173 17:38:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.173 17:38:52 -- common/autotest_common.sh@10 -- # set +x 00:13:31.432 17:38:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.432 17:38:52 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:31.432 17:38:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.432 17:38:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.432 17:38:52 -- common/autotest_common.sh@10 -- # set +x 00:13:31.692 17:38:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.692 17:38:53 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:31.692 17:38:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.692 17:38:53 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.692 17:38:53 -- common/autotest_common.sh@10 -- # set +x 00:13:31.952 17:38:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.952 17:38:53 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:31.952 17:38:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.952 17:38:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.952 17:38:53 -- common/autotest_common.sh@10 -- # set +x 00:13:32.212 17:38:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.212 17:38:53 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:32.212 17:38:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.212 17:38:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.212 17:38:53 -- common/autotest_common.sh@10 -- # set +x 00:13:32.780 17:38:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.780 17:38:54 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:32.780 17:38:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.780 17:38:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.780 17:38:54 -- common/autotest_common.sh@10 -- # set +x 00:13:33.039 17:38:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.039 17:38:54 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:33.039 17:38:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.039 17:38:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.039 17:38:54 -- common/autotest_common.sh@10 -- # set +x 00:13:33.299 17:38:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.299 17:38:54 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:33.299 17:38:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.299 17:38:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.299 17:38:54 -- common/autotest_common.sh@10 -- # set +x 00:13:33.559 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.559 17:38:55 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:33.559 17:38:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.559 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.559 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.819 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.819 17:38:55 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:33.819 17:38:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.819 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.819 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:13:34.388 17:38:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.388 17:38:55 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:34.388 17:38:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.388 17:38:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.388 17:38:55 -- common/autotest_common.sh@10 -- # set +x 00:13:34.647 17:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.647 17:38:56 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:34.647 17:38:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.647 17:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.647 17:38:56 -- common/autotest_common.sh@10 -- # set +x 00:13:34.906 17:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.906 17:38:56 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:34.906 17:38:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.906 17:38:56 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.906 17:38:56 -- common/autotest_common.sh@10 -- # set +x 00:13:35.165 17:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.165 17:38:56 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:35.165 17:38:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.165 17:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.165 17:38:56 -- common/autotest_common.sh@10 -- # set +x 00:13:35.165 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:35.424 17:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.424 17:38:56 -- target/connect_stress.sh@34 -- # kill -0 547834 00:13:35.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (547834) - No such process 00:13:35.424 17:38:56 -- target/connect_stress.sh@38 -- # wait 547834 00:13:35.424 17:38:56 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:35.424 17:38:56 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:35.424 17:38:56 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:35.424 17:38:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:35.424 17:38:56 -- nvmf/common.sh@116 -- # sync 00:13:35.424 17:38:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:35.424 17:38:56 -- nvmf/common.sh@119 -- # set +e 00:13:35.424 17:38:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:35.424 17:38:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:35.424 rmmod nvme_tcp 00:13:35.424 rmmod nvme_fabrics 00:13:35.683 rmmod nvme_keyring 00:13:35.683 17:38:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:35.683 17:38:57 -- nvmf/common.sh@123 -- # set -e 00:13:35.684 17:38:57 -- nvmf/common.sh@124 -- # return 0 00:13:35.684 17:38:57 -- nvmf/common.sh@477 -- # '[' -n 547625 ']' 00:13:35.684 17:38:57 -- nvmf/common.sh@478 -- # killprocess 547625 00:13:35.684 17:38:57 -- common/autotest_common.sh@926 -- # '[' -z 547625 ']' 00:13:35.684 17:38:57 -- common/autotest_common.sh@930 -- # kill -0 547625 00:13:35.684 17:38:57 -- common/autotest_common.sh@931 -- # uname 00:13:35.684 17:38:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:35.684 17:38:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 547625 00:13:35.684 17:38:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:35.684 17:38:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:35.684 17:38:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 547625' 00:13:35.684 killing process with pid 547625 00:13:35.684 17:38:57 -- common/autotest_common.sh@945 -- # kill 547625 00:13:35.684 17:38:57 -- common/autotest_common.sh@950 -- # wait 547625 00:13:35.943 17:38:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:35.943 17:38:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:35.943 17:38:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:35.943 17:38:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:35.943 17:38:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:35.943 17:38:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.943 17:38:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.943 17:38:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.853 17:38:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
00:13:37.853 00:13:37.853 real 0m19.511s 00:13:37.853 user 0m41.698s 00:13:37.853 sys 0m8.194s 00:13:37.853 17:38:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.853 17:38:59 -- common/autotest_common.sh@10 -- # set +x 00:13:37.853 ************************************ 00:13:37.853 END TEST nvmf_connect_stress 00:13:37.853 ************************************ 00:13:37.853 17:38:59 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:37.853 17:38:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:37.853 17:38:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:37.853 17:38:59 -- common/autotest_common.sh@10 -- # set +x 00:13:37.853 ************************************ 00:13:37.853 START TEST nvmf_fused_ordering 00:13:37.853 ************************************ 00:13:37.853 17:38:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:38.113 * Looking for test storage... 00:13:38.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.113 17:38:59 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.113 17:38:59 -- nvmf/common.sh@7 -- # uname -s 00:13:38.113 17:38:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.113 17:38:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.113 17:38:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.113 17:38:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.113 17:38:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.113 17:38:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.113 17:38:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.113 17:38:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.113 17:38:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.113 17:38:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.113 17:38:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:38.113 17:38:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:38.113 17:38:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.113 17:38:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.113 17:38:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.113 17:38:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.113 17:38:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.113 17:38:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.113 17:38:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.113 17:38:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
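The host identity for the nvme-cli side of these tests is generated at source time, as the trace above shows: nvme gen-hostnqn prints an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and common.sh keeps both the full NQN and the bare UUID (NVME_HOSTID), exposing them as --hostnqn/--hostid via the NVME_HOST array defined above. One way to reproduce that capture by hand (the parameter expansion below is an illustration, not necessarily the exact line common.sh uses):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # drop everything through the last ':' to keep the UUID
    echo "--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID"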
00:13:38.113 17:38:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.113 17:38:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.113 17:38:59 -- paths/export.sh@5 -- # export PATH 00:13:38.113 17:38:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.113 17:38:59 -- nvmf/common.sh@46 -- # : 0 00:13:38.113 17:38:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:38.113 17:38:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:38.113 17:38:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:38.113 17:38:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.113 17:38:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.113 17:38:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:38.113 17:38:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:38.113 17:38:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:38.113 17:38:59 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:38.113 17:38:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:38.113 17:38:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.113 17:38:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:38.113 17:38:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:38.113 17:38:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:38.113 17:38:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.113 17:38:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.113 17:38:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.113 17:38:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:38.113 17:38:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:38.113 17:38:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:38.113 17:38:59 -- common/autotest_common.sh@10 -- # set +x 00:13:43.390 17:39:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:43.390 17:39:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:43.390 17:39:04 -- 
nvmf/common.sh@290 -- # local -a pci_devs 00:13:43.390 17:39:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:43.390 17:39:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:43.390 17:39:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:43.390 17:39:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:43.390 17:39:04 -- nvmf/common.sh@294 -- # net_devs=() 00:13:43.390 17:39:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:43.390 17:39:04 -- nvmf/common.sh@295 -- # e810=() 00:13:43.390 17:39:04 -- nvmf/common.sh@295 -- # local -ga e810 00:13:43.390 17:39:04 -- nvmf/common.sh@296 -- # x722=() 00:13:43.390 17:39:04 -- nvmf/common.sh@296 -- # local -ga x722 00:13:43.390 17:39:04 -- nvmf/common.sh@297 -- # mlx=() 00:13:43.390 17:39:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:43.390 17:39:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.390 17:39:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:43.390 17:39:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:43.390 17:39:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:43.390 17:39:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:43.390 17:39:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:43.390 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:43.390 17:39:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:43.390 17:39:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:43.390 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:43.390 17:39:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:43.390 17:39:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:43.390 17:39:04 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:43.390 17:39:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.390 17:39:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:43.390 17:39:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.390 17:39:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:43.390 Found net devices under 0000:86:00.0: cvl_0_0 00:13:43.390 17:39:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.390 17:39:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:43.390 17:39:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.390 17:39:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:43.390 17:39:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.390 17:39:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:43.390 Found net devices under 0000:86:00.1: cvl_0_1 00:13:43.390 17:39:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.390 17:39:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:43.390 17:39:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:43.390 17:39:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:43.390 17:39:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.390 17:39:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.390 17:39:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.390 17:39:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:43.390 17:39:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.390 17:39:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.390 17:39:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:43.390 17:39:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.390 17:39:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.390 17:39:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:43.390 17:39:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:43.390 17:39:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.390 17:39:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.390 17:39:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.390 17:39:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.390 17:39:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:43.390 17:39:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.390 17:39:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.390 17:39:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.390 17:39:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:43.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:43.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:13:43.390 00:13:43.390 --- 10.0.0.2 ping statistics --- 00:13:43.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.390 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:43.390 17:39:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:13:43.390 00:13:43.390 --- 10.0.0.1 ping statistics --- 00:13:43.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.390 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:13:43.390 17:39:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.390 17:39:04 -- nvmf/common.sh@410 -- # return 0 00:13:43.390 17:39:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:43.390 17:39:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.390 17:39:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:43.390 17:39:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.390 17:39:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:43.390 17:39:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:43.390 17:39:04 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:43.390 17:39:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:43.390 17:39:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:43.390 17:39:04 -- common/autotest_common.sh@10 -- # set +x 00:13:43.390 17:39:04 -- nvmf/common.sh@469 -- # nvmfpid=553154 00:13:43.390 17:39:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:43.390 17:39:04 -- nvmf/common.sh@470 -- # waitforlisten 553154 00:13:43.390 17:39:04 -- common/autotest_common.sh@819 -- # '[' -z 553154 ']' 00:13:43.390 17:39:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.390 17:39:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:43.391 17:39:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.391 17:39:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:43.391 17:39:04 -- common/autotest_common.sh@10 -- # set +x 00:13:43.391 [2024-07-24 17:39:04.690538] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:43.391 [2024-07-24 17:39:04.690579] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.391 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.391 [2024-07-24 17:39:04.747525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.391 [2024-07-24 17:39:04.817730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:43.391 [2024-07-24 17:39:04.817839] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.391 [2024-07-24 17:39:04.817847] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:43.391 [2024-07-24 17:39:04.817852] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.391 [2024-07-24 17:39:04.817869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.959 17:39:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:43.959 17:39:05 -- common/autotest_common.sh@852 -- # return 0 00:13:43.959 17:39:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:43.959 17:39:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:43.959 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:43.959 17:39:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.959 17:39:05 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.959 17:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.959 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:43.959 [2024-07-24 17:39:05.528796] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.959 17:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.959 17:39:05 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.959 17:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.959 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:43.959 17:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.960 17:39:05 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.960 17:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.960 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:43.960 [2024-07-24 17:39:05.544967] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.960 17:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.960 17:39:05 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:43.960 17:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.960 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:43.960 NULL1 00:13:43.960 17:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.960 17:39:05 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:43.960 17:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.960 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:44.219 17:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.219 17:39:05 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:44.219 17:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.219 17:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:44.219 17:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.220 17:39:05 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:44.220 [2024-07-24 17:39:05.597838] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
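For reference, the target configuration that the rpc_cmd calls above establish (a TCP transport with the same -o -u 8192 options, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and the NULL1 null bdev attached as namespace 1) can be replayed against a running nvmf_tgt with SPDK's RPC client. Assuming the stock scripts/rpc.py and the default /var/tmp/spdk.sock socket that waitforlisten polls here:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering app starting up here then reaches that listener with the same -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' transport ID shown in the trace.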
00:13:44.220 [2024-07-24 17:39:05.597879] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid553393 ] 00:13:44.220 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.158 Attached to nqn.2016-06.io.spdk:cnode1 00:13:45.158 Namespace ID: 1 size: 1GB 00:13:45.158 fused_ordering(0) 00:13:45.158 fused_ordering(1) 00:13:45.158 fused_ordering(2) 00:13:45.158 fused_ordering(3) 00:13:45.158 fused_ordering(4) 00:13:45.158 fused_ordering(5) 00:13:45.158 fused_ordering(6) 00:13:45.158 fused_ordering(7) 00:13:45.158 fused_ordering(8) 00:13:45.158 fused_ordering(9) 00:13:45.158 fused_ordering(10) 00:13:45.158 fused_ordering(11) 00:13:45.158 fused_ordering(12) 00:13:45.158 fused_ordering(13) 00:13:45.158 fused_ordering(14) 00:13:45.158 fused_ordering(15) 00:13:45.158 fused_ordering(16) 00:13:45.158 fused_ordering(17) 00:13:45.158 fused_ordering(18) 00:13:45.158 fused_ordering(19) 00:13:45.158 fused_ordering(20) 00:13:45.158 fused_ordering(21) 00:13:45.158 fused_ordering(22) 00:13:45.158 fused_ordering(23) 00:13:45.158 fused_ordering(24) 00:13:45.158 fused_ordering(25) 00:13:45.158 fused_ordering(26) 00:13:45.158 fused_ordering(27) 00:13:45.158 fused_ordering(28) 00:13:45.158 fused_ordering(29) 00:13:45.158 fused_ordering(30) 00:13:45.158 fused_ordering(31) 00:13:45.158 fused_ordering(32) 00:13:45.158 fused_ordering(33) 00:13:45.158 fused_ordering(34) 00:13:45.158 fused_ordering(35) 00:13:45.158 fused_ordering(36) 00:13:45.158 fused_ordering(37) 00:13:45.158 fused_ordering(38) 00:13:45.158 fused_ordering(39) 00:13:45.158 fused_ordering(40) 00:13:45.158 fused_ordering(41) 00:13:45.158 fused_ordering(42) 00:13:45.158 fused_ordering(43) 00:13:45.158 fused_ordering(44) 00:13:45.158 fused_ordering(45) 00:13:45.158 fused_ordering(46) 00:13:45.158 fused_ordering(47) 00:13:45.158 fused_ordering(48) 00:13:45.158 fused_ordering(49) 00:13:45.158 fused_ordering(50) 00:13:45.158 fused_ordering(51) 00:13:45.158 fused_ordering(52) 00:13:45.158 fused_ordering(53) 00:13:45.158 fused_ordering(54) 00:13:45.158 fused_ordering(55) 00:13:45.158 fused_ordering(56) 00:13:45.158 fused_ordering(57) 00:13:45.158 fused_ordering(58) 00:13:45.158 fused_ordering(59) 00:13:45.158 fused_ordering(60) 00:13:45.158 fused_ordering(61) 00:13:45.158 fused_ordering(62) 00:13:45.158 fused_ordering(63) 00:13:45.158 fused_ordering(64) 00:13:45.158 fused_ordering(65) 00:13:45.158 fused_ordering(66) 00:13:45.158 fused_ordering(67) 00:13:45.158 fused_ordering(68) 00:13:45.158 fused_ordering(69) 00:13:45.158 fused_ordering(70) 00:13:45.158 fused_ordering(71) 00:13:45.158 fused_ordering(72) 00:13:45.158 fused_ordering(73) 00:13:45.158 fused_ordering(74) 00:13:45.158 fused_ordering(75) 00:13:45.158 fused_ordering(76) 00:13:45.158 fused_ordering(77) 00:13:45.158 fused_ordering(78) 00:13:45.158 fused_ordering(79) 00:13:45.158 fused_ordering(80) 00:13:45.158 fused_ordering(81) 00:13:45.158 fused_ordering(82) 00:13:45.159 fused_ordering(83) 00:13:45.159 fused_ordering(84) 00:13:45.159 fused_ordering(85) 00:13:45.159 fused_ordering(86) 00:13:45.159 fused_ordering(87) 00:13:45.159 fused_ordering(88) 00:13:45.159 fused_ordering(89) 00:13:45.159 fused_ordering(90) 00:13:45.159 fused_ordering(91) 00:13:45.159 fused_ordering(92) 00:13:45.159 fused_ordering(93) 00:13:45.159 fused_ordering(94) 00:13:45.159 fused_ordering(95) 00:13:45.159 fused_ordering(96) 00:13:45.159 
fused_ordering(97) 00:13:45.159 fused_ordering(98) 00:13:45.159 fused_ordering(99) 00:13:45.159 fused_ordering(100) 00:13:45.159 fused_ordering(101) 00:13:45.159 fused_ordering(102) 00:13:45.159 fused_ordering(103) 00:13:45.159 fused_ordering(104) 00:13:45.159 fused_ordering(105) 00:13:45.159 fused_ordering(106) 00:13:45.159 fused_ordering(107) 00:13:45.159 fused_ordering(108) 00:13:45.159 fused_ordering(109) 00:13:45.159 fused_ordering(110) 00:13:45.159 fused_ordering(111) 00:13:45.159 fused_ordering(112) 00:13:45.159 fused_ordering(113) 00:13:45.159 fused_ordering(114) 00:13:45.159 fused_ordering(115) 00:13:45.159 fused_ordering(116) 00:13:45.159 fused_ordering(117) 00:13:45.159 fused_ordering(118) 00:13:45.159 fused_ordering(119) 00:13:45.159 fused_ordering(120) 00:13:45.159 fused_ordering(121) 00:13:45.159 fused_ordering(122) 00:13:45.159 fused_ordering(123) 00:13:45.159 fused_ordering(124) 00:13:45.159 fused_ordering(125) 00:13:45.159 fused_ordering(126) 00:13:45.159 fused_ordering(127) 00:13:45.159 fused_ordering(128) 00:13:45.159 fused_ordering(129) 00:13:45.159 fused_ordering(130) 00:13:45.159 fused_ordering(131) 00:13:45.159 fused_ordering(132) 00:13:45.159 fused_ordering(133) 00:13:45.159 fused_ordering(134) 00:13:45.159 fused_ordering(135) 00:13:45.159 fused_ordering(136) 00:13:45.159 fused_ordering(137) 00:13:45.159 fused_ordering(138) 00:13:45.159 fused_ordering(139) 00:13:45.159 fused_ordering(140) 00:13:45.159 fused_ordering(141) 00:13:45.159 fused_ordering(142) 00:13:45.159 fused_ordering(143) 00:13:45.159 fused_ordering(144) 00:13:45.159 fused_ordering(145) 00:13:45.159 fused_ordering(146) 00:13:45.159 fused_ordering(147) 00:13:45.159 fused_ordering(148) 00:13:45.159 fused_ordering(149) 00:13:45.159 fused_ordering(150) 00:13:45.159 fused_ordering(151) 00:13:45.159 fused_ordering(152) 00:13:45.159 fused_ordering(153) 00:13:45.159 fused_ordering(154) 00:13:45.159 fused_ordering(155) 00:13:45.159 fused_ordering(156) 00:13:45.159 fused_ordering(157) 00:13:45.159 fused_ordering(158) 00:13:45.159 fused_ordering(159) 00:13:45.159 fused_ordering(160) 00:13:45.159 fused_ordering(161) 00:13:45.159 fused_ordering(162) 00:13:45.159 fused_ordering(163) 00:13:45.159 fused_ordering(164) 00:13:45.159 fused_ordering(165) 00:13:45.159 fused_ordering(166) 00:13:45.159 fused_ordering(167) 00:13:45.159 fused_ordering(168) 00:13:45.159 fused_ordering(169) 00:13:45.159 fused_ordering(170) 00:13:45.159 fused_ordering(171) 00:13:45.159 fused_ordering(172) 00:13:45.159 fused_ordering(173) 00:13:45.159 fused_ordering(174) 00:13:45.159 fused_ordering(175) 00:13:45.159 fused_ordering(176) 00:13:45.159 fused_ordering(177) 00:13:45.159 fused_ordering(178) 00:13:45.159 fused_ordering(179) 00:13:45.159 fused_ordering(180) 00:13:45.159 fused_ordering(181) 00:13:45.159 fused_ordering(182) 00:13:45.159 fused_ordering(183) 00:13:45.159 fused_ordering(184) 00:13:45.159 fused_ordering(185) 00:13:45.159 fused_ordering(186) 00:13:45.159 fused_ordering(187) 00:13:45.159 fused_ordering(188) 00:13:45.159 fused_ordering(189) 00:13:45.159 fused_ordering(190) 00:13:45.159 fused_ordering(191) 00:13:45.159 fused_ordering(192) 00:13:45.159 fused_ordering(193) 00:13:45.159 fused_ordering(194) 00:13:45.159 fused_ordering(195) 00:13:45.159 fused_ordering(196) 00:13:45.159 fused_ordering(197) 00:13:45.159 fused_ordering(198) 00:13:45.159 fused_ordering(199) 00:13:45.159 fused_ordering(200) 00:13:45.159 fused_ordering(201) 00:13:45.159 fused_ordering(202) 00:13:45.159 fused_ordering(203) 00:13:45.159 fused_ordering(204) 
00:13:45.159 fused_ordering(205) 00:13:45.728 fused_ordering(206) 00:13:45.728 fused_ordering(207) 00:13:45.728 fused_ordering(208) 00:13:45.728 fused_ordering(209) 00:13:45.728 fused_ordering(210) 00:13:45.728 fused_ordering(211) 00:13:45.728 fused_ordering(212) 00:13:45.728 fused_ordering(213) 00:13:45.728 fused_ordering(214) 00:13:45.728 fused_ordering(215) 00:13:45.728 fused_ordering(216) 00:13:45.728 fused_ordering(217) 00:13:45.728 fused_ordering(218) 00:13:45.728 fused_ordering(219) 00:13:45.728 fused_ordering(220) 00:13:45.729 fused_ordering(221) 00:13:45.729 fused_ordering(222) 00:13:45.729 fused_ordering(223) 00:13:45.729 fused_ordering(224) 00:13:45.729 fused_ordering(225) 00:13:45.729 fused_ordering(226) 00:13:45.729 fused_ordering(227) 00:13:45.729 fused_ordering(228) 00:13:45.729 fused_ordering(229) 00:13:45.729 fused_ordering(230) 00:13:45.729 fused_ordering(231) 00:13:45.729 fused_ordering(232) 00:13:45.729 fused_ordering(233) 00:13:45.729 fused_ordering(234) 00:13:45.729 fused_ordering(235) 00:13:45.729 fused_ordering(236) 00:13:45.729 fused_ordering(237) 00:13:45.729 fused_ordering(238) 00:13:45.729 fused_ordering(239) 00:13:45.729 fused_ordering(240) 00:13:45.729 fused_ordering(241) 00:13:45.729 fused_ordering(242) 00:13:45.729 fused_ordering(243) 00:13:45.729 fused_ordering(244) 00:13:45.729 fused_ordering(245) 00:13:45.729 fused_ordering(246) 00:13:45.729 fused_ordering(247) 00:13:45.729 fused_ordering(248) 00:13:45.729 fused_ordering(249) 00:13:45.729 fused_ordering(250) 00:13:45.729 fused_ordering(251) 00:13:45.729 fused_ordering(252) 00:13:45.729 fused_ordering(253) 00:13:45.729 fused_ordering(254) 00:13:45.729 fused_ordering(255) 00:13:45.729 fused_ordering(256) 00:13:45.729 fused_ordering(257) 00:13:45.729 fused_ordering(258) 00:13:45.729 fused_ordering(259) 00:13:45.729 fused_ordering(260) 00:13:45.729 fused_ordering(261) 00:13:45.729 fused_ordering(262) 00:13:45.729 fused_ordering(263) 00:13:45.729 fused_ordering(264) 00:13:45.729 fused_ordering(265) 00:13:45.729 fused_ordering(266) 00:13:45.729 fused_ordering(267) 00:13:45.729 fused_ordering(268) 00:13:45.729 fused_ordering(269) 00:13:45.729 fused_ordering(270) 00:13:45.729 fused_ordering(271) 00:13:45.729 fused_ordering(272) 00:13:45.729 fused_ordering(273) 00:13:45.729 fused_ordering(274) 00:13:45.729 fused_ordering(275) 00:13:45.729 fused_ordering(276) 00:13:45.729 fused_ordering(277) 00:13:45.729 fused_ordering(278) 00:13:45.729 fused_ordering(279) 00:13:45.729 fused_ordering(280) 00:13:45.729 fused_ordering(281) 00:13:45.729 fused_ordering(282) 00:13:45.729 fused_ordering(283) 00:13:45.729 fused_ordering(284) 00:13:45.729 fused_ordering(285) 00:13:45.729 fused_ordering(286) 00:13:45.729 fused_ordering(287) 00:13:45.729 fused_ordering(288) 00:13:45.729 fused_ordering(289) 00:13:45.729 fused_ordering(290) 00:13:45.729 fused_ordering(291) 00:13:45.729 fused_ordering(292) 00:13:45.729 fused_ordering(293) 00:13:45.729 fused_ordering(294) 00:13:45.729 fused_ordering(295) 00:13:45.729 fused_ordering(296) 00:13:45.729 fused_ordering(297) 00:13:45.729 fused_ordering(298) 00:13:45.729 fused_ordering(299) 00:13:45.729 fused_ordering(300) 00:13:45.729 fused_ordering(301) 00:13:45.729 fused_ordering(302) 00:13:45.729 fused_ordering(303) 00:13:45.729 fused_ordering(304) 00:13:45.729 fused_ordering(305) 00:13:45.729 fused_ordering(306) 00:13:45.729 fused_ordering(307) 00:13:45.729 fused_ordering(308) 00:13:45.729 fused_ordering(309) 00:13:45.729 fused_ordering(310) 00:13:45.729 fused_ordering(311) 00:13:45.729 
fused_ordering(312) 00:13:45.729 fused_ordering(313) 00:13:45.729 fused_ordering(314) 00:13:45.729 fused_ordering(315) 00:13:45.729 fused_ordering(316) 00:13:45.729 fused_ordering(317) 00:13:45.729 fused_ordering(318) 00:13:45.729 fused_ordering(319) 00:13:45.729 fused_ordering(320) 00:13:45.729 fused_ordering(321) 00:13:45.729 fused_ordering(322) 00:13:45.729 fused_ordering(323) 00:13:45.729 fused_ordering(324) 00:13:45.729 fused_ordering(325) 00:13:45.729 fused_ordering(326) 00:13:45.729 fused_ordering(327) 00:13:45.729 fused_ordering(328) 00:13:45.729 fused_ordering(329) 00:13:45.729 fused_ordering(330) 00:13:45.729 fused_ordering(331) 00:13:45.729 fused_ordering(332) 00:13:45.729 fused_ordering(333) 00:13:45.729 fused_ordering(334) 00:13:45.729 fused_ordering(335) 00:13:45.729 fused_ordering(336) 00:13:45.729 fused_ordering(337) 00:13:45.729 fused_ordering(338) 00:13:45.729 fused_ordering(339) 00:13:45.729 fused_ordering(340) 00:13:45.729 fused_ordering(341) 00:13:45.729 fused_ordering(342) 00:13:45.729 fused_ordering(343) 00:13:45.729 fused_ordering(344) 00:13:45.729 fused_ordering(345) 00:13:45.729 fused_ordering(346) 00:13:45.729 fused_ordering(347) 00:13:45.729 fused_ordering(348) 00:13:45.729 fused_ordering(349) 00:13:45.729 fused_ordering(350) 00:13:45.729 fused_ordering(351) 00:13:45.729 fused_ordering(352) 00:13:45.729 fused_ordering(353) 00:13:45.729 fused_ordering(354) 00:13:45.729 fused_ordering(355) 00:13:45.729 fused_ordering(356) 00:13:45.729 fused_ordering(357) 00:13:45.729 fused_ordering(358) 00:13:45.729 fused_ordering(359) 00:13:45.729 fused_ordering(360) 00:13:45.729 fused_ordering(361) 00:13:45.729 fused_ordering(362) 00:13:45.729 fused_ordering(363) 00:13:45.729 fused_ordering(364) 00:13:45.729 fused_ordering(365) 00:13:45.729 fused_ordering(366) 00:13:45.729 fused_ordering(367) 00:13:45.729 fused_ordering(368) 00:13:45.729 fused_ordering(369) 00:13:45.729 fused_ordering(370) 00:13:45.729 fused_ordering(371) 00:13:45.729 fused_ordering(372) 00:13:45.729 fused_ordering(373) 00:13:45.729 fused_ordering(374) 00:13:45.729 fused_ordering(375) 00:13:45.729 fused_ordering(376) 00:13:45.729 fused_ordering(377) 00:13:45.729 fused_ordering(378) 00:13:45.729 fused_ordering(379) 00:13:45.729 fused_ordering(380) 00:13:45.729 fused_ordering(381) 00:13:45.729 fused_ordering(382) 00:13:45.729 fused_ordering(383) 00:13:45.729 fused_ordering(384) 00:13:45.729 fused_ordering(385) 00:13:45.729 fused_ordering(386) 00:13:45.729 fused_ordering(387) 00:13:45.729 fused_ordering(388) 00:13:45.729 fused_ordering(389) 00:13:45.729 fused_ordering(390) 00:13:45.729 fused_ordering(391) 00:13:45.729 fused_ordering(392) 00:13:45.729 fused_ordering(393) 00:13:45.729 fused_ordering(394) 00:13:45.729 fused_ordering(395) 00:13:45.729 fused_ordering(396) 00:13:45.729 fused_ordering(397) 00:13:45.729 fused_ordering(398) 00:13:45.729 fused_ordering(399) 00:13:45.729 fused_ordering(400) 00:13:45.729 fused_ordering(401) 00:13:45.729 fused_ordering(402) 00:13:45.729 fused_ordering(403) 00:13:45.729 fused_ordering(404) 00:13:45.729 fused_ordering(405) 00:13:45.729 fused_ordering(406) 00:13:45.729 fused_ordering(407) 00:13:45.729 fused_ordering(408) 00:13:45.729 fused_ordering(409) 00:13:45.729 fused_ordering(410) 00:13:46.667 fused_ordering(411) 00:13:46.667 fused_ordering(412) 00:13:46.667 fused_ordering(413) 00:13:46.667 fused_ordering(414) 00:13:46.667 fused_ordering(415) 00:13:46.667 fused_ordering(416) 00:13:46.667 fused_ordering(417) 00:13:46.667 fused_ordering(418) 00:13:46.667 fused_ordering(419) 
00:13:46.667 fused_ordering(420) 00:13:46.667 fused_ordering(421) 00:13:46.667 fused_ordering(422) 00:13:46.667 fused_ordering(423) 00:13:46.667 fused_ordering(424) 00:13:46.667 fused_ordering(425) 00:13:46.667 fused_ordering(426) 00:13:46.667 fused_ordering(427) 00:13:46.667 fused_ordering(428) 00:13:46.667 fused_ordering(429) 00:13:46.667 fused_ordering(430) 00:13:46.667 fused_ordering(431) 00:13:46.667 fused_ordering(432) 00:13:46.667 fused_ordering(433) 00:13:46.667 fused_ordering(434) 00:13:46.667 fused_ordering(435) 00:13:46.667 fused_ordering(436) 00:13:46.667 fused_ordering(437) 00:13:46.667 fused_ordering(438) 00:13:46.667 fused_ordering(439) 00:13:46.667 fused_ordering(440) 00:13:46.667 fused_ordering(441) 00:13:46.667 fused_ordering(442) 00:13:46.667 fused_ordering(443) 00:13:46.667 fused_ordering(444) 00:13:46.667 fused_ordering(445) 00:13:46.667 fused_ordering(446) 00:13:46.667 fused_ordering(447) 00:13:46.667 fused_ordering(448) 00:13:46.667 fused_ordering(449) 00:13:46.667 fused_ordering(450) 00:13:46.667 fused_ordering(451) 00:13:46.667 fused_ordering(452) 00:13:46.667 fused_ordering(453) 00:13:46.667 fused_ordering(454) 00:13:46.667 fused_ordering(455) 00:13:46.667 fused_ordering(456) 00:13:46.667 fused_ordering(457) 00:13:46.667 fused_ordering(458) 00:13:46.667 fused_ordering(459) 00:13:46.667 fused_ordering(460) 00:13:46.667 fused_ordering(461) 00:13:46.667 fused_ordering(462) 00:13:46.667 fused_ordering(463) 00:13:46.667 fused_ordering(464) 00:13:46.667 fused_ordering(465) 00:13:46.667 fused_ordering(466) 00:13:46.667 fused_ordering(467) 00:13:46.667 fused_ordering(468) 00:13:46.667 fused_ordering(469) 00:13:46.667 fused_ordering(470) 00:13:46.667 fused_ordering(471) 00:13:46.667 fused_ordering(472) 00:13:46.667 fused_ordering(473) 00:13:46.667 fused_ordering(474) 00:13:46.667 fused_ordering(475) 00:13:46.667 fused_ordering(476) 00:13:46.667 fused_ordering(477) 00:13:46.667 fused_ordering(478) 00:13:46.667 fused_ordering(479) 00:13:46.667 fused_ordering(480) 00:13:46.667 fused_ordering(481) 00:13:46.667 fused_ordering(482) 00:13:46.667 fused_ordering(483) 00:13:46.667 fused_ordering(484) 00:13:46.668 fused_ordering(485) 00:13:46.668 fused_ordering(486) 00:13:46.668 fused_ordering(487) 00:13:46.668 fused_ordering(488) 00:13:46.668 fused_ordering(489) 00:13:46.668 fused_ordering(490) 00:13:46.668 fused_ordering(491) 00:13:46.668 fused_ordering(492) 00:13:46.668 fused_ordering(493) 00:13:46.668 fused_ordering(494) 00:13:46.668 fused_ordering(495) 00:13:46.668 fused_ordering(496) 00:13:46.668 fused_ordering(497) 00:13:46.668 fused_ordering(498) 00:13:46.668 fused_ordering(499) 00:13:46.668 fused_ordering(500) 00:13:46.668 fused_ordering(501) 00:13:46.668 fused_ordering(502) 00:13:46.668 fused_ordering(503) 00:13:46.668 fused_ordering(504) 00:13:46.668 fused_ordering(505) 00:13:46.668 fused_ordering(506) 00:13:46.668 fused_ordering(507) 00:13:46.668 fused_ordering(508) 00:13:46.668 fused_ordering(509) 00:13:46.668 fused_ordering(510) 00:13:46.668 fused_ordering(511) 00:13:46.668 fused_ordering(512) 00:13:46.668 fused_ordering(513) 00:13:46.668 fused_ordering(514) 00:13:46.668 fused_ordering(515) 00:13:46.668 fused_ordering(516) 00:13:46.668 fused_ordering(517) 00:13:46.668 fused_ordering(518) 00:13:46.668 fused_ordering(519) 00:13:46.668 fused_ordering(520) 00:13:46.668 fused_ordering(521) 00:13:46.668 fused_ordering(522) 00:13:46.668 fused_ordering(523) 00:13:46.668 fused_ordering(524) 00:13:46.668 fused_ordering(525) 00:13:46.668 fused_ordering(526) 00:13:46.668 
fused_ordering(527) 00:13:46.668 fused_ordering(528) 00:13:46.668 fused_ordering(529) 00:13:46.668 fused_ordering(530) 00:13:46.668 fused_ordering(531) 00:13:46.668 fused_ordering(532) 00:13:46.668 fused_ordering(533) 00:13:46.668 fused_ordering(534) 00:13:46.668 fused_ordering(535) 00:13:46.668 fused_ordering(536) 00:13:46.668 fused_ordering(537) 00:13:46.668 fused_ordering(538) 00:13:46.668 fused_ordering(539) 00:13:46.668 fused_ordering(540) 00:13:46.668 fused_ordering(541) 00:13:46.668 fused_ordering(542) 00:13:46.668 fused_ordering(543) 00:13:46.668 fused_ordering(544) 00:13:46.668 fused_ordering(545) 00:13:46.668 fused_ordering(546) 00:13:46.668 fused_ordering(547) 00:13:46.668 fused_ordering(548) 00:13:46.668 fused_ordering(549) 00:13:46.668 fused_ordering(550) 00:13:46.668 fused_ordering(551) 00:13:46.668 fused_ordering(552) 00:13:46.668 fused_ordering(553) 00:13:46.668 fused_ordering(554) 00:13:46.668 fused_ordering(555) 00:13:46.668 fused_ordering(556) 00:13:46.668 fused_ordering(557) 00:13:46.668 fused_ordering(558) 00:13:46.668 fused_ordering(559) 00:13:46.668 fused_ordering(560) 00:13:46.668 fused_ordering(561) 00:13:46.668 fused_ordering(562) 00:13:46.668 fused_ordering(563) 00:13:46.668 fused_ordering(564) 00:13:46.668 fused_ordering(565) 00:13:46.668 fused_ordering(566) 00:13:46.668 fused_ordering(567) 00:13:46.668 fused_ordering(568) 00:13:46.668 fused_ordering(569) 00:13:46.668 fused_ordering(570) 00:13:46.668 fused_ordering(571) 00:13:46.668 fused_ordering(572) 00:13:46.668 fused_ordering(573) 00:13:46.668 fused_ordering(574) 00:13:46.668 fused_ordering(575) 00:13:46.668 fused_ordering(576) 00:13:46.668 fused_ordering(577) 00:13:46.668 fused_ordering(578) 00:13:46.668 fused_ordering(579) 00:13:46.668 fused_ordering(580) 00:13:46.668 fused_ordering(581) 00:13:46.668 fused_ordering(582) 00:13:46.668 fused_ordering(583) 00:13:46.668 fused_ordering(584) 00:13:46.668 fused_ordering(585) 00:13:46.668 fused_ordering(586) 00:13:46.668 fused_ordering(587) 00:13:46.668 fused_ordering(588) 00:13:46.668 fused_ordering(589) 00:13:46.668 fused_ordering(590) 00:13:46.668 fused_ordering(591) 00:13:46.668 fused_ordering(592) 00:13:46.668 fused_ordering(593) 00:13:46.668 fused_ordering(594) 00:13:46.668 fused_ordering(595) 00:13:46.668 fused_ordering(596) 00:13:46.668 fused_ordering(597) 00:13:46.668 fused_ordering(598) 00:13:46.668 fused_ordering(599) 00:13:46.668 fused_ordering(600) 00:13:46.668 fused_ordering(601) 00:13:46.668 fused_ordering(602) 00:13:46.668 fused_ordering(603) 00:13:46.668 fused_ordering(604) 00:13:46.668 fused_ordering(605) 00:13:46.668 fused_ordering(606) 00:13:46.668 fused_ordering(607) 00:13:46.668 fused_ordering(608) 00:13:46.668 fused_ordering(609) 00:13:46.668 fused_ordering(610) 00:13:46.668 fused_ordering(611) 00:13:46.668 fused_ordering(612) 00:13:46.668 fused_ordering(613) 00:13:46.668 fused_ordering(614) 00:13:46.668 fused_ordering(615) 00:13:47.237 fused_ordering(616) 00:13:47.237 fused_ordering(617) 00:13:47.237 fused_ordering(618) 00:13:47.237 fused_ordering(619) 00:13:47.237 fused_ordering(620) 00:13:47.237 fused_ordering(621) 00:13:47.237 fused_ordering(622) 00:13:47.237 fused_ordering(623) 00:13:47.237 fused_ordering(624) 00:13:47.237 fused_ordering(625) 00:13:47.237 fused_ordering(626) 00:13:47.237 fused_ordering(627) 00:13:47.237 fused_ordering(628) 00:13:47.237 fused_ordering(629) 00:13:47.237 fused_ordering(630) 00:13:47.237 fused_ordering(631) 00:13:47.237 fused_ordering(632) 00:13:47.237 fused_ordering(633) 00:13:47.237 fused_ordering(634) 
00:13:47.237 fused_ordering(635) 00:13:47.237 fused_ordering(636) 00:13:47.237 fused_ordering(637) 00:13:47.237 fused_ordering(638) 00:13:47.237 fused_ordering(639) 00:13:47.237 fused_ordering(640) 00:13:47.237 fused_ordering(641) 00:13:47.237 fused_ordering(642) 00:13:47.237 fused_ordering(643) 00:13:47.237 fused_ordering(644) 00:13:47.237 fused_ordering(645) 00:13:47.237 fused_ordering(646) 00:13:47.237 fused_ordering(647) 00:13:47.237 fused_ordering(648) 00:13:47.237 fused_ordering(649) 00:13:47.237 fused_ordering(650) 00:13:47.237 fused_ordering(651) 00:13:47.237 fused_ordering(652) 00:13:47.237 fused_ordering(653) 00:13:47.237 fused_ordering(654) 00:13:47.237 fused_ordering(655) 00:13:47.237 fused_ordering(656) 00:13:47.237 fused_ordering(657) 00:13:47.237 fused_ordering(658) 00:13:47.237 fused_ordering(659) 00:13:47.237 fused_ordering(660) 00:13:47.237 fused_ordering(661) 00:13:47.237 fused_ordering(662) 00:13:47.237 fused_ordering(663) 00:13:47.237 fused_ordering(664) 00:13:47.237 fused_ordering(665) 00:13:47.237 fused_ordering(666) 00:13:47.237 fused_ordering(667) 00:13:47.237 fused_ordering(668) 00:13:47.237 fused_ordering(669) 00:13:47.237 fused_ordering(670) 00:13:47.237 fused_ordering(671) 00:13:47.237 fused_ordering(672) 00:13:47.237 fused_ordering(673) 00:13:47.237 fused_ordering(674) 00:13:47.237 fused_ordering(675) 00:13:47.237 fused_ordering(676) 00:13:47.237 fused_ordering(677) 00:13:47.237 fused_ordering(678) 00:13:47.237 fused_ordering(679) 00:13:47.237 fused_ordering(680) 00:13:47.237 fused_ordering(681) 00:13:47.237 fused_ordering(682) 00:13:47.237 fused_ordering(683) 00:13:47.237 fused_ordering(684) 00:13:47.237 fused_ordering(685) 00:13:47.237 fused_ordering(686) 00:13:47.237 fused_ordering(687) 00:13:47.237 fused_ordering(688) 00:13:47.237 fused_ordering(689) 00:13:47.237 fused_ordering(690) 00:13:47.237 fused_ordering(691) 00:13:47.237 fused_ordering(692) 00:13:47.237 fused_ordering(693) 00:13:47.237 fused_ordering(694) 00:13:47.237 fused_ordering(695) 00:13:47.237 fused_ordering(696) 00:13:47.237 fused_ordering(697) 00:13:47.237 fused_ordering(698) 00:13:47.237 fused_ordering(699) 00:13:47.237 fused_ordering(700) 00:13:47.237 fused_ordering(701) 00:13:47.237 fused_ordering(702) 00:13:47.237 fused_ordering(703) 00:13:47.237 fused_ordering(704) 00:13:47.237 fused_ordering(705) 00:13:47.237 fused_ordering(706) 00:13:47.237 fused_ordering(707) 00:13:47.237 fused_ordering(708) 00:13:47.237 fused_ordering(709) 00:13:47.237 fused_ordering(710) 00:13:47.237 fused_ordering(711) 00:13:47.237 fused_ordering(712) 00:13:47.237 fused_ordering(713) 00:13:47.237 fused_ordering(714) 00:13:47.237 fused_ordering(715) 00:13:47.237 fused_ordering(716) 00:13:47.237 fused_ordering(717) 00:13:47.237 fused_ordering(718) 00:13:47.237 fused_ordering(719) 00:13:47.237 fused_ordering(720) 00:13:47.237 fused_ordering(721) 00:13:47.237 fused_ordering(722) 00:13:47.237 fused_ordering(723) 00:13:47.237 fused_ordering(724) 00:13:47.237 fused_ordering(725) 00:13:47.237 fused_ordering(726) 00:13:47.237 fused_ordering(727) 00:13:47.237 fused_ordering(728) 00:13:47.237 fused_ordering(729) 00:13:47.237 fused_ordering(730) 00:13:47.237 fused_ordering(731) 00:13:47.237 fused_ordering(732) 00:13:47.237 fused_ordering(733) 00:13:47.237 fused_ordering(734) 00:13:47.237 fused_ordering(735) 00:13:47.237 fused_ordering(736) 00:13:47.237 fused_ordering(737) 00:13:47.237 fused_ordering(738) 00:13:47.237 fused_ordering(739) 00:13:47.237 fused_ordering(740) 00:13:47.237 fused_ordering(741) 00:13:47.237 
fused_ordering(742) 00:13:47.237 fused_ordering(743) 00:13:47.237 fused_ordering(744) 00:13:47.237 fused_ordering(745) 00:13:47.237 fused_ordering(746) 00:13:47.237 fused_ordering(747) 00:13:47.237 fused_ordering(748) 00:13:47.237 fused_ordering(749) 00:13:47.237 fused_ordering(750) 00:13:47.237 fused_ordering(751) 00:13:47.237 fused_ordering(752) 00:13:47.237 fused_ordering(753) 00:13:47.237 fused_ordering(754) 00:13:47.237 fused_ordering(755) 00:13:47.237 fused_ordering(756) 00:13:47.237 fused_ordering(757) 00:13:47.237 fused_ordering(758) 00:13:47.237 fused_ordering(759) 00:13:47.237 fused_ordering(760) 00:13:47.237 fused_ordering(761) 00:13:47.237 fused_ordering(762) 00:13:47.237 fused_ordering(763) 00:13:47.237 fused_ordering(764) 00:13:47.237 fused_ordering(765) 00:13:47.237 fused_ordering(766) 00:13:47.237 fused_ordering(767) 00:13:47.237 fused_ordering(768) 00:13:47.237 fused_ordering(769) 00:13:47.237 fused_ordering(770) 00:13:47.237 fused_ordering(771) 00:13:47.237 fused_ordering(772) 00:13:47.238 fused_ordering(773) 00:13:47.238 fused_ordering(774) 00:13:47.238 fused_ordering(775) 00:13:47.238 fused_ordering(776) 00:13:47.238 fused_ordering(777) 00:13:47.238 fused_ordering(778) 00:13:47.238 fused_ordering(779) 00:13:47.238 fused_ordering(780) 00:13:47.238 fused_ordering(781) 00:13:47.238 fused_ordering(782) 00:13:47.238 fused_ordering(783) 00:13:47.238 fused_ordering(784) 00:13:47.238 fused_ordering(785) 00:13:47.238 fused_ordering(786) 00:13:47.238 fused_ordering(787) 00:13:47.238 fused_ordering(788) 00:13:47.238 fused_ordering(789) 00:13:47.238 fused_ordering(790) 00:13:47.238 fused_ordering(791) 00:13:47.238 fused_ordering(792) 00:13:47.238 fused_ordering(793) 00:13:47.238 fused_ordering(794) 00:13:47.238 fused_ordering(795) 00:13:47.238 fused_ordering(796) 00:13:47.238 fused_ordering(797) 00:13:47.238 fused_ordering(798) 00:13:47.238 fused_ordering(799) 00:13:47.238 fused_ordering(800) 00:13:47.238 fused_ordering(801) 00:13:47.238 fused_ordering(802) 00:13:47.238 fused_ordering(803) 00:13:47.238 fused_ordering(804) 00:13:47.238 fused_ordering(805) 00:13:47.238 fused_ordering(806) 00:13:47.238 fused_ordering(807) 00:13:47.238 fused_ordering(808) 00:13:47.238 fused_ordering(809) 00:13:47.238 fused_ordering(810) 00:13:47.238 fused_ordering(811) 00:13:47.238 fused_ordering(812) 00:13:47.238 fused_ordering(813) 00:13:47.238 fused_ordering(814) 00:13:47.238 fused_ordering(815) 00:13:47.238 fused_ordering(816) 00:13:47.238 fused_ordering(817) 00:13:47.238 fused_ordering(818) 00:13:47.238 fused_ordering(819) 00:13:47.238 fused_ordering(820) 00:13:48.224 fused_ordering(821) 00:13:48.224 fused_ordering(822) 00:13:48.224 fused_ordering(823) 00:13:48.224 fused_ordering(824) 00:13:48.224 fused_ordering(825) 00:13:48.224 fused_ordering(826) 00:13:48.224 fused_ordering(827) 00:13:48.224 fused_ordering(828) 00:13:48.224 fused_ordering(829) 00:13:48.224 fused_ordering(830) 00:13:48.224 fused_ordering(831) 00:13:48.224 fused_ordering(832) 00:13:48.224 fused_ordering(833) 00:13:48.224 fused_ordering(834) 00:13:48.224 fused_ordering(835) 00:13:48.224 fused_ordering(836) 00:13:48.224 fused_ordering(837) 00:13:48.224 fused_ordering(838) 00:13:48.224 fused_ordering(839) 00:13:48.224 fused_ordering(840) 00:13:48.224 fused_ordering(841) 00:13:48.224 fused_ordering(842) 00:13:48.224 fused_ordering(843) 00:13:48.224 fused_ordering(844) 00:13:48.224 fused_ordering(845) 00:13:48.224 fused_ordering(846) 00:13:48.224 fused_ordering(847) 00:13:48.224 fused_ordering(848) 00:13:48.224 fused_ordering(849) 
00:13:48.224 fused_ordering(850) 00:13:48.224 fused_ordering(851) 00:13:48.224 fused_ordering(852) 00:13:48.224 fused_ordering(853) 00:13:48.224 fused_ordering(854) 00:13:48.224 fused_ordering(855) 00:13:48.224 fused_ordering(856) 00:13:48.224 fused_ordering(857) 00:13:48.224 fused_ordering(858) 00:13:48.224 fused_ordering(859) 00:13:48.224 fused_ordering(860) 00:13:48.224 fused_ordering(861) 00:13:48.224 fused_ordering(862) 00:13:48.224 fused_ordering(863) 00:13:48.224 fused_ordering(864) 00:13:48.224 fused_ordering(865) 00:13:48.224 fused_ordering(866) 00:13:48.224 fused_ordering(867) 00:13:48.224 fused_ordering(868) 00:13:48.224 fused_ordering(869) 00:13:48.224 fused_ordering(870) 00:13:48.224 fused_ordering(871) 00:13:48.224 fused_ordering(872) 00:13:48.224 fused_ordering(873) 00:13:48.224 fused_ordering(874) 00:13:48.224 fused_ordering(875) 00:13:48.224 fused_ordering(876) 00:13:48.224 fused_ordering(877) 00:13:48.224 fused_ordering(878) 00:13:48.224 fused_ordering(879) 00:13:48.224 fused_ordering(880) 00:13:48.224 fused_ordering(881) 00:13:48.224 fused_ordering(882) 00:13:48.224 fused_ordering(883) 00:13:48.224 fused_ordering(884) 00:13:48.224 fused_ordering(885) 00:13:48.224 fused_ordering(886) 00:13:48.224 fused_ordering(887) 00:13:48.224 fused_ordering(888) 00:13:48.224 fused_ordering(889) 00:13:48.224 fused_ordering(890) 00:13:48.224 fused_ordering(891) 00:13:48.224 fused_ordering(892) 00:13:48.224 fused_ordering(893) 00:13:48.224 fused_ordering(894) 00:13:48.224 fused_ordering(895) 00:13:48.224 fused_ordering(896) 00:13:48.224 fused_ordering(897) 00:13:48.224 fused_ordering(898) 00:13:48.224 fused_ordering(899) 00:13:48.224 fused_ordering(900) 00:13:48.224 fused_ordering(901) 00:13:48.224 fused_ordering(902) 00:13:48.224 fused_ordering(903) 00:13:48.224 fused_ordering(904) 00:13:48.224 fused_ordering(905) 00:13:48.224 fused_ordering(906) 00:13:48.224 fused_ordering(907) 00:13:48.224 fused_ordering(908) 00:13:48.224 fused_ordering(909) 00:13:48.224 fused_ordering(910) 00:13:48.224 fused_ordering(911) 00:13:48.224 fused_ordering(912) 00:13:48.224 fused_ordering(913) 00:13:48.224 fused_ordering(914) 00:13:48.224 fused_ordering(915) 00:13:48.224 fused_ordering(916) 00:13:48.224 fused_ordering(917) 00:13:48.224 fused_ordering(918) 00:13:48.224 fused_ordering(919) 00:13:48.224 fused_ordering(920) 00:13:48.224 fused_ordering(921) 00:13:48.224 fused_ordering(922) 00:13:48.224 fused_ordering(923) 00:13:48.224 fused_ordering(924) 00:13:48.224 fused_ordering(925) 00:13:48.224 fused_ordering(926) 00:13:48.224 fused_ordering(927) 00:13:48.224 fused_ordering(928) 00:13:48.224 fused_ordering(929) 00:13:48.224 fused_ordering(930) 00:13:48.224 fused_ordering(931) 00:13:48.224 fused_ordering(932) 00:13:48.224 fused_ordering(933) 00:13:48.224 fused_ordering(934) 00:13:48.224 fused_ordering(935) 00:13:48.224 fused_ordering(936) 00:13:48.224 fused_ordering(937) 00:13:48.224 fused_ordering(938) 00:13:48.224 fused_ordering(939) 00:13:48.224 fused_ordering(940) 00:13:48.224 fused_ordering(941) 00:13:48.224 fused_ordering(942) 00:13:48.224 fused_ordering(943) 00:13:48.224 fused_ordering(944) 00:13:48.224 fused_ordering(945) 00:13:48.224 fused_ordering(946) 00:13:48.224 fused_ordering(947) 00:13:48.224 fused_ordering(948) 00:13:48.224 fused_ordering(949) 00:13:48.224 fused_ordering(950) 00:13:48.224 fused_ordering(951) 00:13:48.224 fused_ordering(952) 00:13:48.224 fused_ordering(953) 00:13:48.224 fused_ordering(954) 00:13:48.224 fused_ordering(955) 00:13:48.224 fused_ordering(956) 00:13:48.224 
fused_ordering(957) 00:13:48.224 fused_ordering(958) 00:13:48.224 fused_ordering(959) 00:13:48.224 fused_ordering(960) 00:13:48.224 fused_ordering(961) 00:13:48.224 fused_ordering(962) 00:13:48.224 fused_ordering(963) 00:13:48.224 fused_ordering(964) 00:13:48.224 fused_ordering(965) 00:13:48.224 fused_ordering(966) 00:13:48.224 fused_ordering(967) 00:13:48.224 fused_ordering(968) 00:13:48.224 fused_ordering(969) 00:13:48.224 fused_ordering(970) 00:13:48.224 fused_ordering(971) 00:13:48.224 fused_ordering(972) 00:13:48.224 fused_ordering(973) 00:13:48.224 fused_ordering(974) 00:13:48.224 fused_ordering(975) 00:13:48.224 fused_ordering(976) 00:13:48.224 fused_ordering(977) 00:13:48.224 fused_ordering(978) 00:13:48.224 fused_ordering(979) 00:13:48.224 fused_ordering(980) 00:13:48.224 fused_ordering(981) 00:13:48.224 fused_ordering(982) 00:13:48.224 fused_ordering(983) 00:13:48.224 fused_ordering(984) 00:13:48.224 fused_ordering(985) 00:13:48.224 fused_ordering(986) 00:13:48.224 fused_ordering(987) 00:13:48.224 fused_ordering(988) 00:13:48.224 fused_ordering(989) 00:13:48.224 fused_ordering(990) 00:13:48.224 fused_ordering(991) 00:13:48.224 fused_ordering(992) 00:13:48.224 fused_ordering(993) 00:13:48.224 fused_ordering(994) 00:13:48.224 fused_ordering(995) 00:13:48.224 fused_ordering(996) 00:13:48.224 fused_ordering(997) 00:13:48.224 fused_ordering(998) 00:13:48.224 fused_ordering(999) 00:13:48.224 fused_ordering(1000) 00:13:48.224 fused_ordering(1001) 00:13:48.224 fused_ordering(1002) 00:13:48.224 fused_ordering(1003) 00:13:48.224 fused_ordering(1004) 00:13:48.224 fused_ordering(1005) 00:13:48.224 fused_ordering(1006) 00:13:48.224 fused_ordering(1007) 00:13:48.224 fused_ordering(1008) 00:13:48.224 fused_ordering(1009) 00:13:48.224 fused_ordering(1010) 00:13:48.224 fused_ordering(1011) 00:13:48.224 fused_ordering(1012) 00:13:48.224 fused_ordering(1013) 00:13:48.224 fused_ordering(1014) 00:13:48.224 fused_ordering(1015) 00:13:48.224 fused_ordering(1016) 00:13:48.224 fused_ordering(1017) 00:13:48.224 fused_ordering(1018) 00:13:48.224 fused_ordering(1019) 00:13:48.224 fused_ordering(1020) 00:13:48.224 fused_ordering(1021) 00:13:48.224 fused_ordering(1022) 00:13:48.224 fused_ordering(1023) 00:13:48.224 17:39:09 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:48.224 17:39:09 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:48.224 17:39:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:48.224 17:39:09 -- nvmf/common.sh@116 -- # sync 00:13:48.224 17:39:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:48.224 17:39:09 -- nvmf/common.sh@119 -- # set +e 00:13:48.225 17:39:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:48.225 17:39:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:48.225 rmmod nvme_tcp 00:13:48.225 rmmod nvme_fabrics 00:13:48.225 rmmod nvme_keyring 00:13:48.225 17:39:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:48.225 17:39:09 -- nvmf/common.sh@123 -- # set -e 00:13:48.225 17:39:09 -- nvmf/common.sh@124 -- # return 0 00:13:48.225 17:39:09 -- nvmf/common.sh@477 -- # '[' -n 553154 ']' 00:13:48.225 17:39:09 -- nvmf/common.sh@478 -- # killprocess 553154 00:13:48.225 17:39:09 -- common/autotest_common.sh@926 -- # '[' -z 553154 ']' 00:13:48.225 17:39:09 -- common/autotest_common.sh@930 -- # kill -0 553154 00:13:48.225 17:39:09 -- common/autotest_common.sh@931 -- # uname 00:13:48.225 17:39:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:48.225 17:39:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 553154 00:13:48.225 17:39:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:48.225 17:39:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:48.225 17:39:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 553154' 00:13:48.225 killing process with pid 553154 00:13:48.225 17:39:09 -- common/autotest_common.sh@945 -- # kill 553154 00:13:48.225 17:39:09 -- common/autotest_common.sh@950 -- # wait 553154 00:13:48.484 17:39:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:48.484 17:39:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:48.484 17:39:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:48.484 17:39:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.484 17:39:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:48.484 17:39:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.484 17:39:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.484 17:39:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.022 17:39:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:51.022 00:13:51.022 real 0m12.652s 00:13:51.022 user 0m8.086s 00:13:51.022 sys 0m6.921s 00:13:51.022 17:39:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.022 17:39:12 -- common/autotest_common.sh@10 -- # set +x 00:13:51.022 ************************************ 00:13:51.022 END TEST nvmf_fused_ordering 00:13:51.022 ************************************ 00:13:51.022 17:39:12 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:51.022 17:39:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:51.022 17:39:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:51.022 17:39:12 -- common/autotest_common.sh@10 -- # set +x 00:13:51.022 ************************************ 00:13:51.022 START TEST nvmf_delete_subsystem 00:13:51.022 ************************************ 00:13:51.022 17:39:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:51.022 * Looking for test storage... 
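The teardown just above (nvmftestfini for the fused_ordering test) syncs, unloads the kernel NVMe/TCP initiator modules, kills the nvmf_tgt process recorded at startup, and flushes the test address from the initiator interface. A minimal sketch of that cleanup, assuming the target pid is held in $nvmfpid, that the target was launched from the same shell (so wait can reap it), and that the initiator-side interface is cvl_0_1 as in this run; this is an illustration, not a copy of the real helper:

#!/usr/bin/env bash
# Hedged sketch of an nvmftestfini-style cleanup.
sync                                  # flush outstanding I/O before tearing down
modprobe -v -r nvme-tcp || true       # unload initiator transport (the log above shows nvme_fabrics/nvme_keyring going with it)
modprobe -v -r nvme-fabrics || true
if [[ -n "${nvmfpid:-}" ]] && kill -0 "$nvmfpid" 2>/dev/null; then
  kill "$nvmfpid"                     # stop the nvmf_tgt app started for the test
  wait "$nvmfpid" || true             # reap it (works because it is a child of this shell)
fi
ip -4 addr flush cvl_0_1 || true      # drop the 10.0.0.x test address from the initiator NIC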
00:13:51.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.022 17:39:12 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.022 17:39:12 -- nvmf/common.sh@7 -- # uname -s 00:13:51.022 17:39:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.022 17:39:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.022 17:39:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.022 17:39:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.022 17:39:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.022 17:39:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.022 17:39:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.022 17:39:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.022 17:39:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.022 17:39:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.022 17:39:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:51.022 17:39:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:51.022 17:39:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.022 17:39:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.022 17:39:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.022 17:39:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.022 17:39:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.022 17:39:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.022 17:39:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.023 17:39:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.023 17:39:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.023 17:39:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.023 17:39:12 -- paths/export.sh@5 -- # export PATH 00:13:51.023 17:39:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.023 17:39:12 -- nvmf/common.sh@46 -- # : 0 00:13:51.023 17:39:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:51.023 17:39:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:51.023 17:39:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:51.023 17:39:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.023 17:39:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.023 17:39:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:51.023 17:39:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:51.023 17:39:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:51.023 17:39:12 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:51.023 17:39:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:51.023 17:39:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.023 17:39:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:51.023 17:39:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:51.023 17:39:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:51.023 17:39:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.023 17:39:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.023 17:39:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.023 17:39:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:51.023 17:39:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:51.023 17:39:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:51.023 17:39:12 -- common/autotest_common.sh@10 -- # set +x 00:13:56.302 17:39:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:56.302 17:39:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:56.302 17:39:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:56.302 17:39:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:56.302 17:39:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:56.302 17:39:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:56.302 17:39:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:56.302 17:39:17 -- nvmf/common.sh@294 -- # net_devs=() 00:13:56.302 17:39:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:56.302 17:39:17 -- nvmf/common.sh@295 -- # e810=() 00:13:56.302 17:39:17 -- nvmf/common.sh@295 -- # local -ga e810 00:13:56.302 17:39:17 -- nvmf/common.sh@296 -- # x722=() 
00:13:56.302 17:39:17 -- nvmf/common.sh@296 -- # local -ga x722 00:13:56.302 17:39:17 -- nvmf/common.sh@297 -- # mlx=() 00:13:56.302 17:39:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:56.302 17:39:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.302 17:39:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:56.302 17:39:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:56.302 17:39:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:56.302 17:39:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:56.302 17:39:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:56.302 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:56.302 17:39:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:56.302 17:39:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:56.302 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:56.302 17:39:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:56.302 17:39:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:56.302 17:39:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.302 17:39:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:56.302 17:39:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.302 17:39:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:56.302 Found net devices under 0000:86:00.0: cvl_0_0 00:13:56.302 17:39:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
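The loop above walks the cached PCI device list, matches Intel E810 parts (vendor 0x8086, device 0x159b), and resolves each function to its bound netdev through /sys/bus/pci/devices/<pci>/net/. A stand-alone sketch of the same lookup, assuming lspci is installed; the script is illustrative and not part of nvmf/common.sh:

#!/usr/bin/env bash
# Print the netdev behind every E810 (8086:159b) PCI function, like the
# "Found net devices under 0000:86:00.x: cvl_0_x" lines above.
for pci in $(lspci -Dnmm | awk '$3 ~ /8086/ && $4 ~ /159b/ {print $1}'); do
  netdirs=(/sys/bus/pci/devices/"$pci"/net/*)
  if [[ -d "${netdirs[0]}" ]]; then
    echo "Found net devices under $pci: $(basename "${netdirs[0]}")"
  else
    echo "No netdev bound to $pci (ice driver not loaded?)"
  fi
done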
00:13:56.302 17:39:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:56.302 17:39:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.302 17:39:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:56.302 17:39:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.302 17:39:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:56.302 Found net devices under 0000:86:00.1: cvl_0_1 00:13:56.302 17:39:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.302 17:39:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:56.302 17:39:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:56.302 17:39:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:56.302 17:39:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.302 17:39:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.302 17:39:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.302 17:39:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:56.302 17:39:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.302 17:39:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.302 17:39:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:56.302 17:39:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.302 17:39:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.302 17:39:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:56.302 17:39:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:56.302 17:39:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.302 17:39:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.302 17:39:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.302 17:39:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.302 17:39:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:56.302 17:39:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.302 17:39:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.302 17:39:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.302 17:39:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:56.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:13:56.302 00:13:56.302 --- 10.0.0.2 ping statistics --- 00:13:56.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.302 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:13:56.302 17:39:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:56.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:13:56.302 00:13:56.302 --- 10.0.0.1 ping statistics --- 00:13:56.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.302 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:13:56.302 17:39:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.302 17:39:17 -- nvmf/common.sh@410 -- # return 0 00:13:56.302 17:39:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:56.302 17:39:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.302 17:39:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:56.302 17:39:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.302 17:39:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:56.303 17:39:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:56.303 17:39:17 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:56.303 17:39:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:56.303 17:39:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:56.303 17:39:17 -- common/autotest_common.sh@10 -- # set +x 00:13:56.303 17:39:17 -- nvmf/common.sh@469 -- # nvmfpid=557807 00:13:56.303 17:39:17 -- nvmf/common.sh@470 -- # waitforlisten 557807 00:13:56.303 17:39:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:56.303 17:39:17 -- common/autotest_common.sh@819 -- # '[' -z 557807 ']' 00:13:56.303 17:39:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.303 17:39:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:56.303 17:39:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.303 17:39:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:56.303 17:39:17 -- common/autotest_common.sh@10 -- # set +x 00:13:56.303 [2024-07-24 17:39:17.543374] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:56.303 [2024-07-24 17:39:17.543422] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.303 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.303 [2024-07-24 17:39:17.601409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:56.303 [2024-07-24 17:39:17.677188] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:56.303 [2024-07-24 17:39:17.677329] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.303 [2024-07-24 17:39:17.677336] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.303 [2024-07-24 17:39:17.677343] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
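With both cvl interfaces answering pings, nvmfappstart (just above) launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocks until the app answers on its RPC socket before any configuration is issued. A rough equivalent of that start-and-wait step, assuming the SPDK tree path from this job and that polling rpc_get_methods is an acceptable readiness check (the real waitforlisten helper does more):

#!/usr/bin/env bash
# Start the target with core mask 0x3 in the target namespace and wait for RPC.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
for _ in $(seq 1 100); do
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  sleep 0.1                           # RPC server not up yet, retry
done
echo "nvmf_tgt running as pid $nvmfpid"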
00:13:56.303 [2024-07-24 17:39:17.677376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.303 [2024-07-24 17:39:17.677380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.870 17:39:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:56.870 17:39:18 -- common/autotest_common.sh@852 -- # return 0 00:13:56.870 17:39:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:56.870 17:39:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:56.870 17:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:56.870 17:39:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.870 17:39:18 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.870 17:39:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.870 17:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:56.870 [2024-07-24 17:39:18.383052] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.870 17:39:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.870 17:39:18 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.870 17:39:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.870 17:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:56.870 17:39:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.870 17:39:18 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.870 17:39:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.870 17:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:56.870 [2024-07-24 17:39:18.403235] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.870 17:39:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.870 17:39:18 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:56.870 17:39:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.870 17:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:56.870 NULL1 00:13:56.870 17:39:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.870 17:39:18 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:56.870 17:39:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.870 17:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:56.870 Delay0 00:13:56.870 17:39:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.870 17:39:18 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.870 17:39:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.870 17:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:56.870 17:39:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.870 17:39:18 -- target/delete_subsystem.sh@28 -- # perf_pid=558054 00:13:56.870 17:39:18 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:56.870 17:39:18 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:56.870 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.128 [2024-07-24 17:39:18.483994] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:59.035 17:39:20 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.035 17:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.035 17:39:20 -- common/autotest_common.sh@10 -- # set +x 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read 
completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 [2024-07-24 17:39:20.694633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a828f0 is same with the state(5) to be set 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 starting I/O failed: -6 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Write completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.295 Read completed with error (sct=0, sc=8) 00:13:59.296 starting I/O failed: -6 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 starting I/O failed: -6 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 starting I/O failed: -6 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 starting I/O failed: -6 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 starting I/O failed: -6 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 starting I/O failed: -6 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 starting I/O failed: -6 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 starting I/O failed: -6 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 [2024-07-24 17:39:20.695319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f5400bf20 is same with the state(5) to be set 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 
00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error 
(sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Write completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:13:59.296 Read completed with error (sct=0, sc=8) 00:14:00.235 [2024-07-24 17:39:21.663934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a79910 is same with the state(5) to be set 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 [2024-07-24 17:39:21.697231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a82ba0 is same with the state(5) to be set 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 
Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 [2024-07-24 17:39:21.697364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f5400c1d0 is same with the state(5) to be set 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Read completed with error (sct=0, sc=8) 00:14:00.235 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 [2024-07-24 17:39:21.698340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ae40 is same with the state(5) to be set 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write 
completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 Write completed with error (sct=0, sc=8) 00:14:00.236 Read completed with error (sct=0, sc=8) 00:14:00.236 [2024-07-24 17:39:21.698495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a82640 is same with the state(5) to be set 00:14:00.236 [2024-07-24 17:39:21.699162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a79910 (9): Bad file descriptor 00:14:00.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:00.236 17:39:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.236 17:39:21 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:00.236 17:39:21 -- target/delete_subsystem.sh@35 -- # kill -0 558054 00:14:00.236 17:39:21 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:00.236 Initializing NVMe Controllers 00:14:00.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:00.236 Controller IO queue size 128, less than required. 00:14:00.236 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:00.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:00.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:00.236 Initialization complete. Launching workers. 
00:14:00.236 ======================================================== 00:14:00.236 Latency(us) 00:14:00.236 Device Information : IOPS MiB/s Average min max 00:14:00.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.20 0.09 947559.45 572.12 1010684.88 00:14:00.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.94 0.08 875137.49 228.55 1013250.02 00:14:00.236 ======================================================== 00:14:00.236 Total : 347.14 0.17 915026.55 228.55 1013250.02 00:14:00.236 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@35 -- # kill -0 558054 00:14:00.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (558054) - No such process 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@45 -- # NOT wait 558054 00:14:00.804 17:39:22 -- common/autotest_common.sh@640 -- # local es=0 00:14:00.804 17:39:22 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 558054 00:14:00.804 17:39:22 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:00.804 17:39:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.804 17:39:22 -- common/autotest_common.sh@632 -- # type -t wait 00:14:00.804 17:39:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.804 17:39:22 -- common/autotest_common.sh@643 -- # wait 558054 00:14:00.804 17:39:22 -- common/autotest_common.sh@643 -- # es=1 00:14:00.804 17:39:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:00.804 17:39:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:00.804 17:39:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:00.804 17:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.804 17:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:00.804 17:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.804 17:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.804 17:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:00.804 [2024-07-24 17:39:22.226782] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.804 17:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.804 17:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.804 17:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:00.804 17:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@54 -- # perf_pid=558600 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@57 -- # kill -0 558600 00:14:00.804 17:39:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.804 EAL: No free 2048 kB hugepages reported on 
node 1 00:14:00.804 [2024-07-24 17:39:22.285948] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:01.372 17:39:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:01.372 17:39:22 -- target/delete_subsystem.sh@57 -- # kill -0 558600 00:14:01.372 17:39:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:01.939 17:39:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:01.939 17:39:23 -- target/delete_subsystem.sh@57 -- # kill -0 558600 00:14:01.939 17:39:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:02.198 17:39:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:02.198 17:39:23 -- target/delete_subsystem.sh@57 -- # kill -0 558600 00:14:02.198 17:39:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:02.765 17:39:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:02.765 17:39:24 -- target/delete_subsystem.sh@57 -- # kill -0 558600 00:14:02.765 17:39:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.334 17:39:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.334 17:39:24 -- target/delete_subsystem.sh@57 -- # kill -0 558600 00:14:03.334 17:39:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.903 17:39:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.903 17:39:25 -- target/delete_subsystem.sh@57 -- # kill -0 558600 00:14:03.903 17:39:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.903 Initializing NVMe Controllers 00:14:03.903 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:03.903 Controller IO queue size 128, less than required. 00:14:03.903 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:03.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:03.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:03.903 Initialization complete. Launching workers. 
00:14:03.903 ======================================================== 00:14:03.903 Latency(us) 00:14:03.903 Device Information : IOPS MiB/s Average min max 00:14:03.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004994.49 1000417.82 1020439.85 00:14:03.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006504.07 1000398.39 1013884.27 00:14:03.903 ======================================================== 00:14:03.903 Total : 256.00 0.12 1005749.28 1000398.39 1020439.85 00:14:03.903 00:14:04.471 17:39:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:04.471 17:39:25 -- target/delete_subsystem.sh@57 -- # kill -0 558600 00:14:04.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (558600) - No such process 00:14:04.472 17:39:25 -- target/delete_subsystem.sh@67 -- # wait 558600 00:14:04.472 17:39:25 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:04.472 17:39:25 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:04.472 17:39:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:04.472 17:39:25 -- nvmf/common.sh@116 -- # sync 00:14:04.472 17:39:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:04.472 17:39:25 -- nvmf/common.sh@119 -- # set +e 00:14:04.472 17:39:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:04.472 17:39:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:04.472 rmmod nvme_tcp 00:14:04.472 rmmod nvme_fabrics 00:14:04.472 rmmod nvme_keyring 00:14:04.472 17:39:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:04.472 17:39:25 -- nvmf/common.sh@123 -- # set -e 00:14:04.472 17:39:25 -- nvmf/common.sh@124 -- # return 0 00:14:04.472 17:39:25 -- nvmf/common.sh@477 -- # '[' -n 557807 ']' 00:14:04.472 17:39:25 -- nvmf/common.sh@478 -- # killprocess 557807 00:14:04.472 17:39:25 -- common/autotest_common.sh@926 -- # '[' -z 557807 ']' 00:14:04.472 17:39:25 -- common/autotest_common.sh@930 -- # kill -0 557807 00:14:04.472 17:39:25 -- common/autotest_common.sh@931 -- # uname 00:14:04.472 17:39:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:04.472 17:39:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 557807 00:14:04.472 17:39:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:04.472 17:39:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:04.472 17:39:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 557807' 00:14:04.472 killing process with pid 557807 00:14:04.472 17:39:25 -- common/autotest_common.sh@945 -- # kill 557807 00:14:04.472 17:39:25 -- common/autotest_common.sh@950 -- # wait 557807 00:14:04.472 17:39:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:04.472 17:39:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:04.472 17:39:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:04.472 17:39:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.472 17:39:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:04.472 17:39:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.472 17:39:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.472 17:39:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.012 17:39:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:07.012 00:14:07.012 real 0m16.018s 00:14:07.012 user 0m30.561s 00:14:07.012 sys 0m4.698s 00:14:07.012 17:39:28 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.012 17:39:28 -- common/autotest_common.sh@10 -- # set +x 00:14:07.012 ************************************ 00:14:07.012 END TEST nvmf_delete_subsystem 00:14:07.012 ************************************ 00:14:07.012 17:39:28 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:14:07.012 17:39:28 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:07.012 17:39:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:07.012 17:39:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:07.013 17:39:28 -- common/autotest_common.sh@10 -- # set +x 00:14:07.013 ************************************ 00:14:07.013 START TEST nvmf_nvme_cli 00:14:07.013 ************************************ 00:14:07.013 17:39:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:07.013 * Looking for test storage... 00:14:07.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.013 17:39:28 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.013 17:39:28 -- nvmf/common.sh@7 -- # uname -s 00:14:07.013 17:39:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.013 17:39:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.013 17:39:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.013 17:39:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.013 17:39:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.013 17:39:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.013 17:39:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.013 17:39:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.013 17:39:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.013 17:39:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.013 17:39:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.013 17:39:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.013 17:39:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.013 17:39:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.013 17:39:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.013 17:39:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.013 17:39:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.013 17:39:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.013 17:39:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.013 17:39:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.013 17:39:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.013 17:39:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.013 17:39:28 -- paths/export.sh@5 -- # export PATH 00:14:07.013 17:39:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.013 17:39:28 -- nvmf/common.sh@46 -- # : 0 00:14:07.013 17:39:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:07.013 17:39:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:07.013 17:39:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:07.013 17:39:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.013 17:39:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.013 17:39:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:07.013 17:39:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:07.013 17:39:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:07.013 17:39:28 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.013 17:39:28 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.013 17:39:28 -- target/nvme_cli.sh@14 -- # devs=() 00:14:07.013 17:39:28 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:07.013 17:39:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:07.013 17:39:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.013 17:39:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:07.013 17:39:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:07.013 17:39:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:07.013 17:39:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.013 17:39:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.013 17:39:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.013 17:39:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:07.013 17:39:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:07.013 17:39:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:07.013 17:39:28 -- common/autotest_common.sh@10 -- # set +x 00:14:12.295 17:39:33 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:12.295 17:39:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:12.295 17:39:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:12.295 17:39:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:12.295 17:39:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:12.295 17:39:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:12.295 17:39:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:12.295 17:39:33 -- nvmf/common.sh@294 -- # net_devs=() 00:14:12.295 17:39:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:12.295 17:39:33 -- nvmf/common.sh@295 -- # e810=() 00:14:12.295 17:39:33 -- nvmf/common.sh@295 -- # local -ga e810 00:14:12.295 17:39:33 -- nvmf/common.sh@296 -- # x722=() 00:14:12.295 17:39:33 -- nvmf/common.sh@296 -- # local -ga x722 00:14:12.295 17:39:33 -- nvmf/common.sh@297 -- # mlx=() 00:14:12.295 17:39:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:12.295 17:39:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.295 17:39:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:12.295 17:39:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:12.295 17:39:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:12.295 17:39:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:12.295 17:39:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:12.295 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:12.295 17:39:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:12.295 17:39:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:12.295 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:12.295 17:39:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
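The xtrace above shows nvmf/common.sh matching the two E810 ports (vendor 0x8086, device 0x159b) against its tables of supported NICs and then, in the lines that follow, resolving each PCI address to a kernel net device through sysfs. A minimal sketch of that lookup, assuming only the standard sysfs layout (the PCI addresses and IDs are the ones printed in the log; the helper name map_pci_to_netdev is illustrative, not part of the harness):

  # List kernel net devices that sit behind a given PCI function.
  # Mirrors the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in the trace.
  map_pci_to_netdev() {
      local pci=$1 dev
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $dev ]] && echo "${dev##*/}"   # e.g. cvl_0_0
      done
  }

  # The harness only keeps devices whose vendor:device pair is in its e810/x722/mlx tables.
  for pci in 0000:86:00.0 0000:86:00.1; do
      vendor=$(cat /sys/bus/pci/devices/"$pci"/vendor)   # 0x8086
      device=$(cat /sys/bus/pci/devices/"$pci"/device)   # 0x159b
      echo "Found $pci ($vendor - $device): $(map_pci_to_netdev "$pci")"
  done

With two matching ports found, the first becomes the target-side interface and the second the initiator-side interface, which is what the cvl_0_0 / cvl_0_1 assignments below reflect.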
00:14:12.295 17:39:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:12.295 17:39:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:12.295 17:39:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.295 17:39:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:12.295 17:39:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.295 17:39:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:12.295 Found net devices under 0000:86:00.0: cvl_0_0 00:14:12.295 17:39:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.295 17:39:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:12.295 17:39:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.295 17:39:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:12.295 17:39:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.295 17:39:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:12.295 Found net devices under 0000:86:00.1: cvl_0_1 00:14:12.295 17:39:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.295 17:39:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:12.295 17:39:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:12.295 17:39:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:12.295 17:39:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:12.295 17:39:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.295 17:39:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.295 17:39:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.295 17:39:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:12.295 17:39:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.295 17:39:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.295 17:39:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:12.296 17:39:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.296 17:39:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.296 17:39:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:12.296 17:39:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:12.296 17:39:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.296 17:39:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.296 17:39:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.296 17:39:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.296 17:39:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:12.296 17:39:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.296 17:39:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.296 17:39:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.296 17:39:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:12.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:12.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:14:12.296 00:14:12.296 --- 10.0.0.2 ping statistics --- 00:14:12.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.296 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:12.296 17:39:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:14:12.296 00:14:12.296 --- 10.0.0.1 ping statistics --- 00:14:12.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.296 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:14:12.296 17:39:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.296 17:39:33 -- nvmf/common.sh@410 -- # return 0 00:14:12.296 17:39:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:12.296 17:39:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.296 17:39:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:12.296 17:39:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:12.296 17:39:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.296 17:39:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:12.296 17:39:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:12.296 17:39:33 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:12.296 17:39:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:12.296 17:39:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:12.296 17:39:33 -- common/autotest_common.sh@10 -- # set +x 00:14:12.296 17:39:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:12.296 17:39:33 -- nvmf/common.sh@469 -- # nvmfpid=562695 00:14:12.296 17:39:33 -- nvmf/common.sh@470 -- # waitforlisten 562695 00:14:12.296 17:39:33 -- common/autotest_common.sh@819 -- # '[' -z 562695 ']' 00:14:12.296 17:39:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.296 17:39:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:12.296 17:39:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.296 17:39:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:12.296 17:39:33 -- common/autotest_common.sh@10 -- # set +x 00:14:12.296 [2024-07-24 17:39:33.650112] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:12.296 [2024-07-24 17:39:33.650155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.296 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.296 [2024-07-24 17:39:33.708116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.296 [2024-07-24 17:39:33.787744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:12.296 [2024-07-24 17:39:33.787855] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.296 [2024-07-24 17:39:33.787863] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
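Before any subsystem RPCs run, the harness splits target and initiator across a network namespace: the first E810 port (cvl_0_0) is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, connectivity is ping-tested in both directions, and nvme-tcp is loaded on the host side. A condensed sketch of that sequence, using the interface and namespace names taken from the trace:

  TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"              # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator keeps the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator
  modprobe nvme-tcp                                 # host-side NVMe/TCP driver

The namespace split is what lets a single physical node act as both NVMe/TCP target and initiator over real E810 hardware rather than loopback.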
00:14:12.296 [2024-07-24 17:39:33.787868] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.296 [2024-07-24 17:39:33.787914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.296 [2024-07-24 17:39:33.788007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.296 [2024-07-24 17:39:33.788094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.296 [2024-07-24 17:39:33.788096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.921 17:39:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:12.921 17:39:34 -- common/autotest_common.sh@852 -- # return 0 00:14:12.921 17:39:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:12.921 17:39:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:12.921 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:13.185 17:39:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.185 17:39:34 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.185 17:39:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.185 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:13.185 [2024-07-24 17:39:34.517442] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.185 17:39:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.185 17:39:34 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:13.185 17:39:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.185 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:13.185 Malloc0 00:14:13.185 17:39:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.185 17:39:34 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:13.185 17:39:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.185 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:13.185 Malloc1 00:14:13.185 17:39:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.185 17:39:34 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:13.185 17:39:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.185 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:13.185 17:39:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.185 17:39:34 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.185 17:39:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.185 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:13.185 17:39:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.185 17:39:34 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:13.185 17:39:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.185 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:13.185 17:39:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.185 17:39:34 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.185 17:39:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.185 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:13.185 [2024-07-24 17:39:34.598655] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:14:13.185 17:39:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.185 17:39:34 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:13.185 17:39:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.185 17:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:13.185 17:39:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.185 17:39:34 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:13.185 00:14:13.185 Discovery Log Number of Records 2, Generation counter 2 00:14:13.185 =====Discovery Log Entry 0====== 00:14:13.185 trtype: tcp 00:14:13.185 adrfam: ipv4 00:14:13.185 subtype: current discovery subsystem 00:14:13.185 treq: not required 00:14:13.185 portid: 0 00:14:13.185 trsvcid: 4420 00:14:13.185 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:13.185 traddr: 10.0.0.2 00:14:13.185 eflags: explicit discovery connections, duplicate discovery information 00:14:13.185 sectype: none 00:14:13.185 =====Discovery Log Entry 1====== 00:14:13.185 trtype: tcp 00:14:13.185 adrfam: ipv4 00:14:13.185 subtype: nvme subsystem 00:14:13.185 treq: not required 00:14:13.185 portid: 0 00:14:13.185 trsvcid: 4420 00:14:13.185 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:13.185 traddr: 10.0.0.2 00:14:13.185 eflags: none 00:14:13.185 sectype: none 00:14:13.185 17:39:34 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:13.185 17:39:34 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:13.185 17:39:34 -- nvmf/common.sh@510 -- # local dev _ 00:14:13.185 17:39:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:13.185 17:39:34 -- nvmf/common.sh@509 -- # nvme list 00:14:13.185 17:39:34 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:13.185 17:39:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:13.185 17:39:34 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:13.185 17:39:34 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:13.185 17:39:34 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:13.185 17:39:34 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:14.567 17:39:35 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:14.567 17:39:35 -- common/autotest_common.sh@1177 -- # local i=0 00:14:14.568 17:39:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.568 17:39:35 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:14:14.568 17:39:35 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:14:14.568 17:39:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:16.477 17:39:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:16.477 17:39:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:16.477 17:39:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.477 17:39:37 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:14:16.477 17:39:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.477 17:39:37 -- common/autotest_common.sh@1187 -- # return 0 00:14:16.477 17:39:37 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:16.477 17:39:37 -- 
nvmf/common.sh@510 -- # local dev _ 00:14:16.477 17:39:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:16.477 17:39:37 -- nvmf/common.sh@509 -- # nvme list 00:14:16.477 17:39:37 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:16.477 17:39:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:16.477 17:39:37 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:16.477 17:39:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:16.477 17:39:37 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:16.477 17:39:37 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:16.477 17:39:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:16.477 17:39:37 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:16.477 17:39:37 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:16.477 17:39:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:16.477 17:39:37 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:16.477 /dev/nvme0n1 ]] 00:14:16.477 17:39:37 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:16.477 17:39:37 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:16.478 17:39:37 -- nvmf/common.sh@510 -- # local dev _ 00:14:16.478 17:39:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:16.478 17:39:37 -- nvmf/common.sh@509 -- # nvme list 00:14:16.478 17:39:37 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:16.478 17:39:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:16.478 17:39:37 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:16.478 17:39:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:16.478 17:39:37 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:16.478 17:39:37 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:16.478 17:39:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:16.478 17:39:37 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:16.478 17:39:37 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:16.478 17:39:37 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:16.478 17:39:37 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:16.478 17:39:37 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.478 17:39:38 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:16.478 17:39:38 -- common/autotest_common.sh@1198 -- # local i=0 00:14:16.478 17:39:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:16.478 17:39:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.478 17:39:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:16.478 17:39:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.478 17:39:38 -- common/autotest_common.sh@1210 -- # return 0 00:14:16.478 17:39:38 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:16.478 17:39:38 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.478 17:39:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:16.478 17:39:38 -- common/autotest_common.sh@10 -- # set +x 00:14:16.478 17:39:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:16.478 17:39:38 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:16.478 17:39:38 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:16.478 17:39:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:16.478 17:39:38 -- nvmf/common.sh@116 -- # sync 00:14:16.478 17:39:38 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:16.478 17:39:38 -- nvmf/common.sh@119 -- # set +e 00:14:16.478 17:39:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:16.478 17:39:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:16.478 rmmod nvme_tcp 00:14:16.478 rmmod nvme_fabrics 00:14:16.738 rmmod nvme_keyring 00:14:16.738 17:39:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:16.738 17:39:38 -- nvmf/common.sh@123 -- # set -e 00:14:16.738 17:39:38 -- nvmf/common.sh@124 -- # return 0 00:14:16.738 17:39:38 -- nvmf/common.sh@477 -- # '[' -n 562695 ']' 00:14:16.738 17:39:38 -- nvmf/common.sh@478 -- # killprocess 562695 00:14:16.738 17:39:38 -- common/autotest_common.sh@926 -- # '[' -z 562695 ']' 00:14:16.738 17:39:38 -- common/autotest_common.sh@930 -- # kill -0 562695 00:14:16.738 17:39:38 -- common/autotest_common.sh@931 -- # uname 00:14:16.738 17:39:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:16.738 17:39:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 562695 00:14:16.738 17:39:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:16.738 17:39:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:16.738 17:39:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 562695' 00:14:16.738 killing process with pid 562695 00:14:16.738 17:39:38 -- common/autotest_common.sh@945 -- # kill 562695 00:14:16.738 17:39:38 -- common/autotest_common.sh@950 -- # wait 562695 00:14:16.999 17:39:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:16.999 17:39:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:16.999 17:39:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:16.999 17:39:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.999 17:39:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:16.999 17:39:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.999 17:39:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.999 17:39:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.910 17:39:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:18.910 00:14:18.910 real 0m12.305s 00:14:18.910 user 0m19.838s 00:14:18.910 sys 0m4.496s 00:14:18.910 17:39:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.910 17:39:40 -- common/autotest_common.sh@10 -- # set +x 00:14:18.910 ************************************ 00:14:18.910 END TEST nvmf_nvme_cli 00:14:18.910 ************************************ 00:14:18.910 17:39:40 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:18.910 17:39:40 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:18.910 17:39:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:18.910 17:39:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:18.910 17:39:40 -- common/autotest_common.sh@10 -- # set +x 00:14:19.170 ************************************ 00:14:19.170 START TEST nvmf_host_management 00:14:19.170 ************************************ 00:14:19.170 17:39:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:19.170 * Looking for test storage... 
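The nvme_cli test that just finished follows a provision, connect, verify, tear-down shape: two malloc bdevs are exported as namespaces of nqn.2016-06.io.spdk:cnode1, the host discovers and connects over 10.0.0.2:4420, waits until lsblk shows both namespaces with the expected serial, then disconnects and deletes the subsystem. A hedged reconstruction of that flow using the RPC names and nvme-cli calls visible in the trace; rpc_cmd in the trace is assumed to be a thin wrapper around SPDK's scripts/rpc.py, and the RPC/HOSTID variables below are illustrative:

  RPC=./scripts/rpc.py                              # stand-in for the rpc_cmd wrapper
  NQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562

  # Target side: transport, two malloc namespaces, data and discovery listeners.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc1
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Host side: discover, connect, wait for both namespaces, then tear down.
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 4420
  nvme connect  --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
  # Simplified waitforserial: block until both namespaces are visible by serial number.
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do sleep 2; done
  nvme disconnect -n "$NQN"
  $RPC nvmf_delete_subsystem "$NQN"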
00:14:19.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.170 17:39:40 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.170 17:39:40 -- nvmf/common.sh@7 -- # uname -s 00:14:19.170 17:39:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.170 17:39:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.170 17:39:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.170 17:39:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.170 17:39:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.170 17:39:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.170 17:39:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.170 17:39:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.170 17:39:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.170 17:39:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.170 17:39:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.170 17:39:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.170 17:39:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.170 17:39:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.170 17:39:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.170 17:39:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.170 17:39:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.170 17:39:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.170 17:39:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.170 17:39:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.170 17:39:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.171 17:39:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.171 17:39:40 -- paths/export.sh@5 -- # export PATH 00:14:19.171 17:39:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.171 17:39:40 -- nvmf/common.sh@46 -- # : 0 00:14:19.171 17:39:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:19.171 17:39:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:19.171 17:39:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:19.171 17:39:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.171 17:39:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.171 17:39:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:19.171 17:39:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:19.171 17:39:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:19.171 17:39:40 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.171 17:39:40 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.171 17:39:40 -- target/host_management.sh@104 -- # nvmftestinit 00:14:19.171 17:39:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:19.171 17:39:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.171 17:39:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:19.171 17:39:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:19.171 17:39:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:19.171 17:39:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.171 17:39:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.171 17:39:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.171 17:39:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:19.171 17:39:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:19.171 17:39:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:19.171 17:39:40 -- common/autotest_common.sh@10 -- # set +x 00:14:24.468 17:39:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:24.468 17:39:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:24.468 17:39:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:24.468 17:39:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:24.468 17:39:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:24.468 17:39:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:24.468 17:39:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:24.468 17:39:45 -- nvmf/common.sh@294 -- # net_devs=() 00:14:24.468 17:39:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:24.468 
17:39:45 -- nvmf/common.sh@295 -- # e810=() 00:14:24.468 17:39:45 -- nvmf/common.sh@295 -- # local -ga e810 00:14:24.468 17:39:45 -- nvmf/common.sh@296 -- # x722=() 00:14:24.468 17:39:45 -- nvmf/common.sh@296 -- # local -ga x722 00:14:24.468 17:39:45 -- nvmf/common.sh@297 -- # mlx=() 00:14:24.468 17:39:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:24.468 17:39:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.468 17:39:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:24.468 17:39:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:24.468 17:39:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:24.468 17:39:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:24.468 17:39:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:24.468 17:39:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:24.468 17:39:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:24.468 17:39:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:24.468 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:24.468 17:39:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:24.468 17:39:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:24.468 17:39:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.468 17:39:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.468 17:39:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:24.468 17:39:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:24.468 17:39:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:24.468 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:24.468 17:39:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:24.469 17:39:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:24.469 17:39:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.469 17:39:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:24.469 17:39:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.469 17:39:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:14:24.469 Found net devices under 0000:86:00.0: cvl_0_0 00:14:24.469 17:39:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.469 17:39:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:24.469 17:39:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.469 17:39:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:24.469 17:39:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.469 17:39:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:24.469 Found net devices under 0000:86:00.1: cvl_0_1 00:14:24.469 17:39:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.469 17:39:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:24.469 17:39:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:24.469 17:39:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:24.469 17:39:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.469 17:39:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.469 17:39:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.469 17:39:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:24.469 17:39:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.469 17:39:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.469 17:39:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:24.469 17:39:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.469 17:39:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.469 17:39:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:24.469 17:39:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:24.469 17:39:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.469 17:39:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.469 17:39:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.469 17:39:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.469 17:39:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:24.469 17:39:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.469 17:39:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.469 17:39:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.469 17:39:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:24.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:24.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:14:24.469 00:14:24.469 --- 10.0.0.2 ping statistics --- 00:14:24.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.469 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:24.469 17:39:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:24.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:14:24.469 00:14:24.469 --- 10.0.0.1 ping statistics --- 00:14:24.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.469 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:14:24.469 17:39:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.469 17:39:45 -- nvmf/common.sh@410 -- # return 0 00:14:24.469 17:39:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:24.469 17:39:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.469 17:39:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:24.469 17:39:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.469 17:39:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:24.469 17:39:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:24.469 17:39:45 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:24.469 17:39:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:24.469 17:39:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:24.469 17:39:45 -- common/autotest_common.sh@10 -- # set +x 00:14:24.469 ************************************ 00:14:24.469 START TEST nvmf_host_management 00:14:24.469 ************************************ 00:14:24.469 17:39:45 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:14:24.469 17:39:45 -- target/host_management.sh@69 -- # starttarget 00:14:24.469 17:39:45 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:24.469 17:39:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:24.469 17:39:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:24.469 17:39:45 -- common/autotest_common.sh@10 -- # set +x 00:14:24.469 17:39:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:24.469 17:39:45 -- nvmf/common.sh@469 -- # nvmfpid=566835 00:14:24.469 17:39:45 -- nvmf/common.sh@470 -- # waitforlisten 566835 00:14:24.469 17:39:45 -- common/autotest_common.sh@819 -- # '[' -z 566835 ']' 00:14:24.469 17:39:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.469 17:39:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:24.469 17:39:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.469 17:39:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:24.469 17:39:45 -- common/autotest_common.sh@10 -- # set +x 00:14:24.469 [2024-07-24 17:39:45.834958] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
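The two pings above close out the nvmf_tcp_init step: one port of the E810 pair (cvl_0_0) was moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 for the target, while the other port (cvl_0_1) stayed in the default namespace as the 10.0.0.1 initiator side, with TCP port 4420 opened in the firewall. Condensed from the trace above (interface names are specific to this test bed), the topology amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator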
00:14:24.469 [2024-07-24 17:39:45.834998] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.469 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.469 [2024-07-24 17:39:45.892850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:24.469 [2024-07-24 17:39:45.976526] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:24.469 [2024-07-24 17:39:45.976631] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.469 [2024-07-24 17:39:45.976639] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.469 [2024-07-24 17:39:45.976645] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.469 [2024-07-24 17:39:45.976750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.469 [2024-07-24 17:39:45.976770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.469 [2024-07-24 17:39:45.976878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.469 [2024-07-24 17:39:45.976880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:25.408 17:39:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:25.408 17:39:46 -- common/autotest_common.sh@852 -- # return 0 00:14:25.408 17:39:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:25.408 17:39:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:25.408 17:39:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.408 17:39:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.408 17:39:46 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.408 17:39:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.408 17:39:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.408 [2024-07-24 17:39:46.693299] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.408 17:39:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.408 17:39:46 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:25.408 17:39:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:25.408 17:39:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.408 17:39:46 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:25.408 17:39:46 -- target/host_management.sh@23 -- # cat 00:14:25.408 17:39:46 -- target/host_management.sh@30 -- # rpc_cmd 00:14:25.408 17:39:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.408 17:39:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.408 Malloc0 00:14:25.408 [2024-07-24 17:39:46.753098] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.408 17:39:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.408 17:39:46 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:25.408 17:39:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:25.408 17:39:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.408 17:39:46 -- target/host_management.sh@73 -- # perfpid=567103 00:14:25.408 17:39:46 -- target/host_management.sh@74 -- # 
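At this point the target side is up: nvmf_tgt runs inside the namespace with core mask 0x1E (reactors on cores 1-4, per the notices above) and the harness blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A rough sketch of that startup, assuming the usual poll-the-RPC-socket pattern (the real waitforlisten helper lives in test/common/autotest_common.sh and adds more error handling than shown here):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # any cheap RPC works as a liveness probe; rpc_get_methods is the usual choice
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done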
waitforlisten 567103 /var/tmp/bdevperf.sock 00:14:25.408 17:39:46 -- common/autotest_common.sh@819 -- # '[' -z 567103 ']' 00:14:25.408 17:39:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.408 17:39:46 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:25.408 17:39:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:25.408 17:39:46 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:25.408 17:39:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:25.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:25.408 17:39:46 -- nvmf/common.sh@520 -- # config=() 00:14:25.408 17:39:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:25.408 17:39:46 -- nvmf/common.sh@520 -- # local subsystem config 00:14:25.408 17:39:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.408 17:39:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:25.408 17:39:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:25.408 { 00:14:25.408 "params": { 00:14:25.408 "name": "Nvme$subsystem", 00:14:25.408 "trtype": "$TEST_TRANSPORT", 00:14:25.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:25.408 "adrfam": "ipv4", 00:14:25.408 "trsvcid": "$NVMF_PORT", 00:14:25.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:25.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:25.408 "hdgst": ${hdgst:-false}, 00:14:25.408 "ddgst": ${ddgst:-false} 00:14:25.408 }, 00:14:25.408 "method": "bdev_nvme_attach_controller" 00:14:25.408 } 00:14:25.408 EOF 00:14:25.408 )") 00:14:25.408 17:39:46 -- nvmf/common.sh@542 -- # cat 00:14:25.408 17:39:46 -- nvmf/common.sh@544 -- # jq . 00:14:25.408 17:39:46 -- nvmf/common.sh@545 -- # IFS=, 00:14:25.408 17:39:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:25.408 "params": { 00:14:25.408 "name": "Nvme0", 00:14:25.408 "trtype": "tcp", 00:14:25.408 "traddr": "10.0.0.2", 00:14:25.408 "adrfam": "ipv4", 00:14:25.408 "trsvcid": "4420", 00:14:25.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:25.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:25.408 "hdgst": false, 00:14:25.408 "ddgst": false 00:14:25.408 }, 00:14:25.408 "method": "bdev_nvme_attach_controller" 00:14:25.408 }' 00:14:25.408 [2024-07-24 17:39:46.843087] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:25.408 [2024-07-24 17:39:46.843131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567103 ] 00:14:25.408 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.408 [2024-07-24 17:39:46.897499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.408 [2024-07-24 17:39:46.967894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.667 Running I/O for 10 seconds... 
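The printed JSON is everything the initiator needs: bdevperf attaches an NVMe-oF TCP controller to 10.0.0.2:4420 as host0 and drives 64 outstanding 64 KiB verify I/Os for 10 seconds. The /dev/fd/63 path is how a bash process substitution typically appears to the child process, so an equivalent explicit invocation would look like the following (gen_nvmf_target_json is a helper defined in test/nvmf/common.sh and is not reproduced in this trace):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10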
00:14:26.238 17:39:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:26.238 17:39:47 -- common/autotest_common.sh@852 -- # return 0 00:14:26.238 17:39:47 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:26.238 17:39:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.238 17:39:47 -- common/autotest_common.sh@10 -- # set +x 00:14:26.238 17:39:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.238 17:39:47 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:26.238 17:39:47 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:26.238 17:39:47 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:26.238 17:39:47 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:26.238 17:39:47 -- target/host_management.sh@52 -- # local ret=1 00:14:26.238 17:39:47 -- target/host_management.sh@53 -- # local i 00:14:26.238 17:39:47 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:26.238 17:39:47 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:26.238 17:39:47 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:26.238 17:39:47 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:26.238 17:39:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.238 17:39:47 -- common/autotest_common.sh@10 -- # set +x 00:14:26.238 17:39:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.238 17:39:47 -- target/host_management.sh@55 -- # read_io_count=956 00:14:26.238 17:39:47 -- target/host_management.sh@58 -- # '[' 956 -ge 100 ']' 00:14:26.238 17:39:47 -- target/host_management.sh@59 -- # ret=0 00:14:26.238 17:39:47 -- target/host_management.sh@60 -- # break 00:14:26.238 17:39:47 -- target/host_management.sh@64 -- # return 0 00:14:26.238 17:39:47 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:26.238 17:39:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.238 17:39:47 -- common/autotest_common.sh@10 -- # set +x 00:14:26.238 [2024-07-24 17:39:47.716421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the 
state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 
17:39:47.716774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.238 [2024-07-24 17:39:47.716828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ba40 is same with the state(5) to be set 00:14:26.239 [2024-07-24 17:39:47.717933] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:14:26.239 17:39:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.239 17:39:47 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:26.239 17:39:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.239 17:39:47 -- common/autotest_common.sh@10 -- # set +x 00:14:26.239 [2024-07-24 17:39:47.725036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.239 [2024-07-24 17:39:47.725060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.725070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.239 [2024-07-24 17:39:47.725077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.725084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.239 [2024-07-24 17:39:47.725091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.725099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.239 [2024-07-24 17:39:47.725105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.725112] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d75900 is same with the state(5) to be set 00:14:26.239 17:39:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.239 17:39:47 -- target/host_management.sh@87 -- # sleep 1 00:14:26.239 [2024-07-24 17:39:47.735051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d75900 (9): Bad file descriptor 00:14:26.239 [2024-07-24 17:39:47.745107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.239 [2024-07-24 17:39:47.745531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.239 [2024-07-24 17:39:47.745537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:26.240 [2024-07-24 17:39:47.745547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 
[2024-07-24 17:39:47.745693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 
17:39:47.745841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.745988] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.745994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.746002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.746009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.746017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.746023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.746031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.746037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.746049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.240 [2024-07-24 17:39:47.746056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.240 [2024-07-24 17:39:47.746064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73170 is same with the state(5) to be set 00:14:26.240 task offset: 9984 on job bdev=Nvme0n1 fails 00:14:26.240 00:14:26.240 Latency(us) 00:14:26.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.240 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:26.240 Job: Nvme0n1 ended in about 0.51 seconds with error 00:14:26.240 Verification LBA range: start 0x0 length 0x400 00:14:26.240 Nvme0n1 : 0.51 2132.56 133.29 126.14 0.00 28001.88 5983.72 54708.31 00:14:26.240 =================================================================================================================== 00:14:26.240 Total : 2132.56 133.29 126.14 0.00 28001.88 5983.72 54708.31 00:14:26.241 [2024-07-24 17:39:47.748572] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:26.241 [2024-07-24 17:39:47.748585] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:26.241 [2024-07-24 17:39:47.801587] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
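This is the core of the host-management check: while bdevperf is mid-run, the host is removed from the subsystem's allowed list, every queued command comes back ABORTED - SQ DELETION (reflected in the Fail/s column of the table above), and once the host is re-added the initiator's controller reset succeeds. Stripped of the test plumbing, the two RPCs driven by rpc_cmd above are:

    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # in-flight verify I/O is aborted and the bdevperf job fails, then access is restored:
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0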
00:14:27.190 17:39:48 -- target/host_management.sh@91 -- # kill -9 567103 00:14:27.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (567103) - No such process 00:14:27.190 17:39:48 -- target/host_management.sh@91 -- # true 00:14:27.190 17:39:48 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:27.190 17:39:48 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:27.190 17:39:48 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:27.190 17:39:48 -- nvmf/common.sh@520 -- # config=() 00:14:27.190 17:39:48 -- nvmf/common.sh@520 -- # local subsystem config 00:14:27.190 17:39:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:27.190 17:39:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:27.190 { 00:14:27.190 "params": { 00:14:27.190 "name": "Nvme$subsystem", 00:14:27.190 "trtype": "$TEST_TRANSPORT", 00:14:27.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:27.190 "adrfam": "ipv4", 00:14:27.190 "trsvcid": "$NVMF_PORT", 00:14:27.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:27.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:27.190 "hdgst": ${hdgst:-false}, 00:14:27.190 "ddgst": ${ddgst:-false} 00:14:27.190 }, 00:14:27.190 "method": "bdev_nvme_attach_controller" 00:14:27.190 } 00:14:27.190 EOF 00:14:27.190 )") 00:14:27.190 17:39:48 -- nvmf/common.sh@542 -- # cat 00:14:27.190 17:39:48 -- nvmf/common.sh@544 -- # jq . 00:14:27.190 17:39:48 -- nvmf/common.sh@545 -- # IFS=, 00:14:27.190 17:39:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:27.190 "params": { 00:14:27.190 "name": "Nvme0", 00:14:27.190 "trtype": "tcp", 00:14:27.190 "traddr": "10.0.0.2", 00:14:27.190 "adrfam": "ipv4", 00:14:27.190 "trsvcid": "4420", 00:14:27.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:27.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:27.190 "hdgst": false, 00:14:27.190 "ddgst": false 00:14:27.190 }, 00:14:27.190 "method": "bdev_nvme_attach_controller" 00:14:27.190 }' 00:14:27.191 [2024-07-24 17:39:48.784532] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:27.191 [2024-07-24 17:39:48.784582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567364 ] 00:14:27.451 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.451 [2024-07-24 17:39:48.839500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.451 [2024-07-24 17:39:48.908291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.712 Running I/O for 1 seconds... 
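The first bdevperf already exited through its own failure path, so the trap's kill -9 finds no process and the script simply starts a fresh 1-second verify run against the re-added host. Earlier, the harness had confirmed the first run was genuinely mid-I/O by polling read counters over the bdevperf RPC socket; that waitforio check (whose jq filter appears above) boils down to:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops'   # looped until this exceeds the threshold (100 here)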
00:14:29.094 00:14:29.094 Latency(us) 00:14:29.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.094 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:29.094 Verification LBA range: start 0x0 length 0x400 00:14:29.094 Nvme0n1 : 1.02 1846.53 115.41 0.00 0.00 34304.32 3419.27 51289.04 00:14:29.094 =================================================================================================================== 00:14:29.094 Total : 1846.53 115.41 0.00 0.00 34304.32 3419.27 51289.04 00:14:29.094 17:39:50 -- target/host_management.sh@101 -- # stoptarget 00:14:29.094 17:39:50 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:29.094 17:39:50 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:29.094 17:39:50 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:29.094 17:39:50 -- target/host_management.sh@40 -- # nvmftestfini 00:14:29.094 17:39:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:29.094 17:39:50 -- nvmf/common.sh@116 -- # sync 00:14:29.094 17:39:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:29.094 17:39:50 -- nvmf/common.sh@119 -- # set +e 00:14:29.094 17:39:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:29.094 17:39:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:29.094 rmmod nvme_tcp 00:14:29.094 rmmod nvme_fabrics 00:14:29.094 rmmod nvme_keyring 00:14:29.094 17:39:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:29.094 17:39:50 -- nvmf/common.sh@123 -- # set -e 00:14:29.094 17:39:50 -- nvmf/common.sh@124 -- # return 0 00:14:29.094 17:39:50 -- nvmf/common.sh@477 -- # '[' -n 566835 ']' 00:14:29.094 17:39:50 -- nvmf/common.sh@478 -- # killprocess 566835 00:14:29.094 17:39:50 -- common/autotest_common.sh@926 -- # '[' -z 566835 ']' 00:14:29.094 17:39:50 -- common/autotest_common.sh@930 -- # kill -0 566835 00:14:29.094 17:39:50 -- common/autotest_common.sh@931 -- # uname 00:14:29.094 17:39:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:29.094 17:39:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 566835 00:14:29.094 17:39:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:29.094 17:39:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:29.094 17:39:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 566835' 00:14:29.094 killing process with pid 566835 00:14:29.094 17:39:50 -- common/autotest_common.sh@945 -- # kill 566835 00:14:29.094 17:39:50 -- common/autotest_common.sh@950 -- # wait 566835 00:14:29.354 [2024-07-24 17:39:50.785473] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:29.354 17:39:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:29.354 17:39:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:29.354 17:39:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:29.354 17:39:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.354 17:39:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:29.354 17:39:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.354 17:39:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.354 17:39:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.265 17:39:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:31.526 
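Teardown mirrors the setup: the kernel initiator modules are unloaded, the target process is stopped by pid, and the namespace plumbing is flushed before the next test starts. A rough equivalent of the nvmftestfini / remove_spdk_ns path above (the real helpers retry and verify each step; the netns delete is assumed, since the log only shows the helper name):

    modprobe -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk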
00:14:31.526 real 0m7.068s 00:14:31.526 user 0m21.803s 00:14:31.526 sys 0m1.179s 00:14:31.526 17:39:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.526 17:39:52 -- common/autotest_common.sh@10 -- # set +x 00:14:31.526 ************************************ 00:14:31.526 END TEST nvmf_host_management 00:14:31.526 ************************************ 00:14:31.526 17:39:52 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:31.526 00:14:31.526 real 0m12.386s 00:14:31.526 user 0m23.340s 00:14:31.526 sys 0m4.986s 00:14:31.526 17:39:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.526 17:39:52 -- common/autotest_common.sh@10 -- # set +x 00:14:31.526 ************************************ 00:14:31.526 END TEST nvmf_host_management 00:14:31.526 ************************************ 00:14:31.526 17:39:52 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:31.526 17:39:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:31.526 17:39:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:31.526 17:39:52 -- common/autotest_common.sh@10 -- # set +x 00:14:31.526 ************************************ 00:14:31.526 START TEST nvmf_lvol 00:14:31.526 ************************************ 00:14:31.526 17:39:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:31.526 * Looking for test storage... 00:14:31.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.526 17:39:53 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.526 17:39:53 -- nvmf/common.sh@7 -- # uname -s 00:14:31.526 17:39:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.526 17:39:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.526 17:39:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.526 17:39:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.526 17:39:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.526 17:39:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.526 17:39:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.527 17:39:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.527 17:39:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.527 17:39:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.527 17:39:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:31.527 17:39:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:31.527 17:39:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.527 17:39:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.527 17:39:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.527 17:39:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.527 17:39:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.527 17:39:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.527 17:39:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.527 17:39:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.527 17:39:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.527 17:39:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.527 17:39:53 -- paths/export.sh@5 -- # export PATH 00:14:31.527 17:39:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.527 17:39:53 -- nvmf/common.sh@46 -- # : 0 00:14:31.527 17:39:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:31.527 17:39:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:31.527 17:39:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:31.527 17:39:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.527 17:39:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.527 17:39:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:31.527 17:39:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:31.527 17:39:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:31.527 17:39:53 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.527 17:39:53 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.527 17:39:53 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:31.527 17:39:53 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:31.527 17:39:53 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.527 17:39:53 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:31.527 17:39:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:31.527 17:39:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:14:31.527 17:39:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:31.527 17:39:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:31.527 17:39:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:31.527 17:39:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.527 17:39:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.527 17:39:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.527 17:39:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:31.527 17:39:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:31.527 17:39:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:31.527 17:39:53 -- common/autotest_common.sh@10 -- # set +x 00:14:36.807 17:39:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:36.808 17:39:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:36.808 17:39:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:36.808 17:39:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:36.808 17:39:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:36.808 17:39:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:36.808 17:39:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:36.808 17:39:57 -- nvmf/common.sh@294 -- # net_devs=() 00:14:36.808 17:39:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:36.808 17:39:57 -- nvmf/common.sh@295 -- # e810=() 00:14:36.808 17:39:57 -- nvmf/common.sh@295 -- # local -ga e810 00:14:36.808 17:39:57 -- nvmf/common.sh@296 -- # x722=() 00:14:36.808 17:39:57 -- nvmf/common.sh@296 -- # local -ga x722 00:14:36.808 17:39:57 -- nvmf/common.sh@297 -- # mlx=() 00:14:36.808 17:39:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:36.808 17:39:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.808 17:39:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:36.808 17:39:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:36.808 17:39:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:36.808 17:39:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:36.808 17:39:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:36.808 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:36.808 17:39:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:36.808 17:39:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:36.808 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:36.808 17:39:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:36.808 17:39:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:36.808 17:39:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.808 17:39:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:36.808 17:39:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.808 17:39:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:36.808 Found net devices under 0000:86:00.0: cvl_0_0 00:14:36.808 17:39:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.808 17:39:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:36.808 17:39:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.808 17:39:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:36.808 17:39:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.808 17:39:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:36.808 Found net devices under 0000:86:00.1: cvl_0_1 00:14:36.808 17:39:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.808 17:39:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:36.808 17:39:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:36.808 17:39:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:36.808 17:39:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:36.808 17:39:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.808 17:39:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.808 17:39:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.808 17:39:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:36.808 17:39:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.808 17:39:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.808 17:39:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:36.808 17:39:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.808 17:39:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.808 17:39:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:36.808 17:39:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:36.808 17:39:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.808 17:39:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.808 17:39:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:14:36.808 17:39:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.808 17:39:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:36.808 17:39:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.808 17:39:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.808 17:39:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.808 17:39:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:36.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:14:36.808 00:14:36.808 --- 10.0.0.2 ping statistics --- 00:14:36.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.808 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:14:36.808 17:39:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:14:36.808 00:14:36.808 --- 10.0.0.1 ping statistics --- 00:14:36.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.808 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:14:36.808 17:39:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.808 17:39:58 -- nvmf/common.sh@410 -- # return 0 00:14:36.808 17:39:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:36.808 17:39:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.808 17:39:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:36.808 17:39:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:36.808 17:39:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.808 17:39:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:36.808 17:39:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:36.808 17:39:58 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:36.808 17:39:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:36.808 17:39:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:36.808 17:39:58 -- common/autotest_common.sh@10 -- # set +x 00:14:36.808 17:39:58 -- nvmf/common.sh@469 -- # nvmfpid=571148 00:14:36.808 17:39:58 -- nvmf/common.sh@470 -- # waitforlisten 571148 00:14:36.808 17:39:58 -- common/autotest_common.sh@819 -- # '[' -z 571148 ']' 00:14:36.808 17:39:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.808 17:39:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:36.808 17:39:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.808 17:39:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:36.808 17:39:58 -- common/autotest_common.sh@10 -- # set +x 00:14:36.808 17:39:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:36.808 [2024-07-24 17:39:58.074011] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
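What the block above sets up is the point-to-point TCP path the tests use: one E810 port (cvl_0_0) is moved into a private network namespace for the target while the other (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction confirms reachability before the nvmf target is started. Condensed from the trace (interface names and addresses as in this run):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespaced target -> root ns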
00:14:36.808 [2024-07-24 17:39:58.074068] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.808 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.808 [2024-07-24 17:39:58.131288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:36.808 [2024-07-24 17:39:58.209430] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:36.808 [2024-07-24 17:39:58.209539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.808 [2024-07-24 17:39:58.209546] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.808 [2024-07-24 17:39:58.209552] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.808 [2024-07-24 17:39:58.209593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.808 [2024-07-24 17:39:58.209608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.808 [2024-07-24 17:39:58.209611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.377 17:39:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:37.377 17:39:58 -- common/autotest_common.sh@852 -- # return 0 00:14:37.377 17:39:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:37.377 17:39:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:37.377 17:39:58 -- common/autotest_common.sh@10 -- # set +x 00:14:37.377 17:39:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.377 17:39:58 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:37.637 [2024-07-24 17:39:59.059522] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.637 17:39:59 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:38.002 17:39:59 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:38.002 17:39:59 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:38.002 17:39:59 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:38.002 17:39:59 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:38.261 17:39:59 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:38.261 17:39:59 -- target/nvmf_lvol.sh@29 -- # lvs=efa4fc7e-e718-40e5-be27-1bc05834c232 00:14:38.261 17:39:59 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u efa4fc7e-e718-40e5-be27-1bc05834c232 lvol 20 00:14:38.524 17:39:59 -- target/nvmf_lvol.sh@32 -- # lvol=bcd0bba4-ffb8-449a-b2ba-c24a501d4f0f 00:14:38.524 17:39:59 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:38.782 17:40:00 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bcd0bba4-ffb8-449a-b2ba-c24a501d4f0f 00:14:38.782 17:40:00 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:39.041 [2024-07-24 17:40:00.515083] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.041 17:40:00 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.299 17:40:00 -- target/nvmf_lvol.sh@42 -- # perf_pid=571652 00:14:39.299 17:40:00 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:39.299 17:40:00 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:39.299 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.245 17:40:01 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bcd0bba4-ffb8-449a-b2ba-c24a501d4f0f MY_SNAPSHOT 00:14:40.504 17:40:01 -- target/nvmf_lvol.sh@47 -- # snapshot=3ce1daa2-28dc-4e1e-9130-b4eb51fa22e0 00:14:40.504 17:40:01 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bcd0bba4-ffb8-449a-b2ba-c24a501d4f0f 30 00:14:40.763 17:40:02 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3ce1daa2-28dc-4e1e-9130-b4eb51fa22e0 MY_CLONE 00:14:40.763 17:40:02 -- target/nvmf_lvol.sh@49 -- # clone=730eecc3-07f3-476e-9b97-a2e5ae7608cb 00:14:40.763 17:40:02 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 730eecc3-07f3-476e-9b97-a2e5ae7608cb 00:14:41.332 17:40:02 -- target/nvmf_lvol.sh@53 -- # wait 571652 00:14:51.335 Initializing NVMe Controllers 00:14:51.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:51.335 Controller IO queue size 128, less than required. 00:14:51.335 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:51.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:51.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:51.335 Initialization complete. Launching workers. 
00:14:51.335 ======================================================== 00:14:51.335 Latency(us) 00:14:51.335 Device Information : IOPS MiB/s Average min max 00:14:51.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11832.20 46.22 10821.90 1722.74 60459.50 00:14:51.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11643.20 45.48 10997.01 3513.53 50352.48 00:14:51.335 ======================================================== 00:14:51.335 Total : 23475.40 91.70 10908.75 1722.74 60459.50 00:14:51.335 00:14:51.335 17:40:11 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:51.336 17:40:11 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bcd0bba4-ffb8-449a-b2ba-c24a501d4f0f 00:14:51.336 17:40:11 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u efa4fc7e-e718-40e5-be27-1bc05834c232 00:14:51.336 17:40:11 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:51.336 17:40:11 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:51.336 17:40:11 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:51.336 17:40:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:51.336 17:40:11 -- nvmf/common.sh@116 -- # sync 00:14:51.336 17:40:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:51.336 17:40:11 -- nvmf/common.sh@119 -- # set +e 00:14:51.336 17:40:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:51.336 17:40:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:51.336 rmmod nvme_tcp 00:14:51.336 rmmod nvme_fabrics 00:14:51.336 rmmod nvme_keyring 00:14:51.336 17:40:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:51.336 17:40:11 -- nvmf/common.sh@123 -- # set -e 00:14:51.336 17:40:11 -- nvmf/common.sh@124 -- # return 0 00:14:51.336 17:40:11 -- nvmf/common.sh@477 -- # '[' -n 571148 ']' 00:14:51.336 17:40:11 -- nvmf/common.sh@478 -- # killprocess 571148 00:14:51.336 17:40:11 -- common/autotest_common.sh@926 -- # '[' -z 571148 ']' 00:14:51.336 17:40:11 -- common/autotest_common.sh@930 -- # kill -0 571148 00:14:51.336 17:40:11 -- common/autotest_common.sh@931 -- # uname 00:14:51.336 17:40:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:51.336 17:40:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 571148 00:14:51.336 17:40:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:51.336 17:40:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:51.336 17:40:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 571148' 00:14:51.336 killing process with pid 571148 00:14:51.336 17:40:11 -- common/autotest_common.sh@945 -- # kill 571148 00:14:51.336 17:40:11 -- common/autotest_common.sh@950 -- # wait 571148 00:14:51.336 17:40:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:51.336 17:40:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:51.336 17:40:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:51.336 17:40:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.336 17:40:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:51.336 17:40:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.336 17:40:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.336 17:40:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
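Stripped of the xtrace noise, the nvmf_lvol run above reduces to the following RPC sequence (rpc.py is shorthand for scripts/rpc.py against the running target; the UUIDs in angle brackets are per-run values returned by the create calls):

    rpc.py bdev_malloc_create 64 512                    # Malloc0
    rpc.py bdev_malloc_create 64 512                    # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs           # returns the lvstore UUID
    rpc.py bdev_lvol_create -u <lvs_uuid> lvol 20       # 20 MiB lvol, returns its UUID
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf writes to the exported namespace:
    rpc.py bdev_lvol_snapshot <lvol_uuid> MY_SNAPSHOT
    rpc.py bdev_lvol_resize <lvol_uuid> 30
    rpc.py bdev_lvol_clone <snapshot_uuid> MY_CLONE
    rpc.py bdev_lvol_inflate <clone_uuid>
    # teardown
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete <lvol_uuid>
    rpc.py bdev_lvol_delete_lvstore -u <lvs_uuid>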
00:14:52.716 17:40:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:52.716 00:14:52.716 real 0m21.145s 00:14:52.716 user 1m3.442s 00:14:52.716 sys 0m6.577s 00:14:52.716 17:40:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.716 17:40:14 -- common/autotest_common.sh@10 -- # set +x 00:14:52.716 ************************************ 00:14:52.716 END TEST nvmf_lvol 00:14:52.716 ************************************ 00:14:52.716 17:40:14 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:52.716 17:40:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:52.716 17:40:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:52.716 17:40:14 -- common/autotest_common.sh@10 -- # set +x 00:14:52.716 ************************************ 00:14:52.716 START TEST nvmf_lvs_grow 00:14:52.716 ************************************ 00:14:52.716 17:40:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:52.716 * Looking for test storage... 00:14:52.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.716 17:40:14 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.716 17:40:14 -- nvmf/common.sh@7 -- # uname -s 00:14:52.716 17:40:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.716 17:40:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.716 17:40:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.716 17:40:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.716 17:40:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.716 17:40:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.716 17:40:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.716 17:40:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.716 17:40:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.716 17:40:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.716 17:40:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:52.716 17:40:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:52.716 17:40:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.716 17:40:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.716 17:40:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.716 17:40:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.716 17:40:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.716 17:40:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.716 17:40:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.716 17:40:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.716 17:40:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.716 17:40:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.716 17:40:14 -- paths/export.sh@5 -- # export PATH 00:14:52.716 17:40:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.716 17:40:14 -- nvmf/common.sh@46 -- # : 0 00:14:52.716 17:40:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:52.716 17:40:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:52.716 17:40:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:52.716 17:40:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.716 17:40:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.716 17:40:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:52.716 17:40:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:52.716 17:40:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:52.716 17:40:14 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.716 17:40:14 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.716 17:40:14 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:52.716 17:40:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:52.716 17:40:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.716 17:40:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:52.716 17:40:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:52.716 17:40:14 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:14:52.716 17:40:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.716 17:40:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.716 17:40:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.716 17:40:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:52.716 17:40:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:52.716 17:40:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:52.716 17:40:14 -- common/autotest_common.sh@10 -- # set +x 00:14:58.199 17:40:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:58.199 17:40:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:58.199 17:40:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:58.199 17:40:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:58.199 17:40:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:58.199 17:40:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:58.199 17:40:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:58.199 17:40:19 -- nvmf/common.sh@294 -- # net_devs=() 00:14:58.199 17:40:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:58.199 17:40:19 -- nvmf/common.sh@295 -- # e810=() 00:14:58.199 17:40:19 -- nvmf/common.sh@295 -- # local -ga e810 00:14:58.199 17:40:19 -- nvmf/common.sh@296 -- # x722=() 00:14:58.199 17:40:19 -- nvmf/common.sh@296 -- # local -ga x722 00:14:58.199 17:40:19 -- nvmf/common.sh@297 -- # mlx=() 00:14:58.199 17:40:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:58.199 17:40:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.199 17:40:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:58.199 17:40:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:58.199 17:40:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:58.199 17:40:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:58.199 17:40:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:58.199 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:58.199 17:40:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:58.199 
17:40:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:58.199 17:40:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:58.199 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:58.199 17:40:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:58.199 17:40:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:58.199 17:40:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.199 17:40:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:58.199 17:40:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.199 17:40:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:58.199 Found net devices under 0000:86:00.0: cvl_0_0 00:14:58.199 17:40:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.199 17:40:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:58.199 17:40:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.199 17:40:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:58.199 17:40:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.199 17:40:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:58.199 Found net devices under 0000:86:00.1: cvl_0_1 00:14:58.199 17:40:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.199 17:40:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:58.199 17:40:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:58.199 17:40:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:58.199 17:40:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.199 17:40:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.199 17:40:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.199 17:40:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:58.199 17:40:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.199 17:40:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.199 17:40:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:58.199 17:40:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.199 17:40:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.199 17:40:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:58.199 17:40:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:58.199 17:40:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.199 17:40:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.199 17:40:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.199 17:40:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.199 17:40:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:58.199 
17:40:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.199 17:40:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.199 17:40:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.199 17:40:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:58.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:14:58.199 00:14:58.199 --- 10.0.0.2 ping statistics --- 00:14:58.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.199 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:14:58.199 17:40:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:14:58.199 00:14:58.199 --- 10.0.0.1 ping statistics --- 00:14:58.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.199 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:14:58.199 17:40:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.199 17:40:19 -- nvmf/common.sh@410 -- # return 0 00:14:58.199 17:40:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:58.199 17:40:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.199 17:40:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:58.199 17:40:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.199 17:40:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:58.199 17:40:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:58.199 17:40:19 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:58.199 17:40:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:58.199 17:40:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:58.199 17:40:19 -- common/autotest_common.sh@10 -- # set +x 00:14:58.199 17:40:19 -- nvmf/common.sh@469 -- # nvmfpid=576839 00:14:58.199 17:40:19 -- nvmf/common.sh@470 -- # waitforlisten 576839 00:14:58.199 17:40:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:58.199 17:40:19 -- common/autotest_common.sh@819 -- # '[' -z 576839 ']' 00:14:58.199 17:40:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.199 17:40:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:58.199 17:40:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.199 17:40:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:58.199 17:40:19 -- common/autotest_common.sh@10 -- # set +x 00:14:58.199 [2024-07-24 17:40:19.582752] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
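The lvs_grow_clean case that follows exercises lvstore growth on top of a file-backed AIO bdev. Condensed from the trace below (paths shortened; the lvstore UUID is returned at run time):

    truncate -s 200M test/nvmf/target/aio_bdev                 # 200 MiB backing file
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs           # starts with 49 data clusters
    rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150              # 150 MiB lvol, exported via cnode0
    truncate -s 400M test/nvmf/target/aio_bdev                  # grow the backing file...
    rpc.py bdev_aio_rescan aio_bdev                             # ...and let the AIO bdev pick it up
    rpc.py bdev_lvol_grow_lvstore -u <lvs_uuid>                 # data clusters: 49 -> 99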
00:14:58.199 [2024-07-24 17:40:19.582796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.199 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.199 [2024-07-24 17:40:19.640805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.199 [2024-07-24 17:40:19.717882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:58.199 [2024-07-24 17:40:19.717988] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.199 [2024-07-24 17:40:19.717996] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.199 [2024-07-24 17:40:19.718005] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.199 [2024-07-24 17:40:19.718020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.137 17:40:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:59.137 17:40:20 -- common/autotest_common.sh@852 -- # return 0 00:14:59.137 17:40:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:59.137 17:40:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:59.138 17:40:20 -- common/autotest_common.sh@10 -- # set +x 00:14:59.138 17:40:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:59.138 [2024-07-24 17:40:20.561868] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:59.138 17:40:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:59.138 17:40:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:59.138 17:40:20 -- common/autotest_common.sh@10 -- # set +x 00:14:59.138 ************************************ 00:14:59.138 START TEST lvs_grow_clean 00:14:59.138 ************************************ 00:14:59.138 17:40:20 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:59.138 17:40:20 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:59.397 17:40:20 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:59.397 17:40:20 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:59.397 17:40:20 -- target/nvmf_lvs_grow.sh@28 -- # lvs=77ef852b-4361-47d6-800a-63b07311b64e 00:14:59.397 17:40:20 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77ef852b-4361-47d6-800a-63b07311b64e 00:14:59.397 17:40:20 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:59.656 17:40:21 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:59.656 17:40:21 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:59.657 17:40:21 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 77ef852b-4361-47d6-800a-63b07311b64e lvol 150 00:14:59.915 17:40:21 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2524e88c-971b-4ebe-825b-a601a5f72a38 00:14:59.916 17:40:21 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:59.916 17:40:21 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:59.916 [2024-07-24 17:40:21.452558] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:59.916 [2024-07-24 17:40:21.452605] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:59.916 true 00:14:59.916 17:40:21 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77ef852b-4361-47d6-800a-63b07311b64e 00:14:59.916 17:40:21 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:00.175 17:40:21 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:00.175 17:40:21 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:00.434 17:40:21 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2524e88c-971b-4ebe-825b-a601a5f72a38 00:15:00.434 17:40:21 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:00.693 [2024-07-24 17:40:22.102523] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.693 17:40:22 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:00.693 17:40:22 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=577344 00:15:00.693 17:40:22 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:00.693 17:40:22 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:00.693 17:40:22 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 577344 /var/tmp/bdevperf.sock 00:15:00.693 17:40:22 -- common/autotest_common.sh@819 -- # '[' -z 577344 ']' 00:15:00.693 17:40:22 
-- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.694 17:40:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:00.694 17:40:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.694 17:40:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:00.694 17:40:22 -- common/autotest_common.sh@10 -- # set +x 00:15:00.953 [2024-07-24 17:40:22.320781] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:00.953 [2024-07-24 17:40:22.320829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577344 ] 00:15:00.953 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.953 [2024-07-24 17:40:22.373869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.953 [2024-07-24 17:40:22.449953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.523 17:40:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.523 17:40:23 -- common/autotest_common.sh@852 -- # return 0 00:15:01.523 17:40:23 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:01.783 Nvme0n1 00:15:02.043 17:40:23 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:02.043 [ 00:15:02.043 { 00:15:02.043 "name": "Nvme0n1", 00:15:02.043 "aliases": [ 00:15:02.043 "2524e88c-971b-4ebe-825b-a601a5f72a38" 00:15:02.043 ], 00:15:02.043 "product_name": "NVMe disk", 00:15:02.043 "block_size": 4096, 00:15:02.043 "num_blocks": 38912, 00:15:02.043 "uuid": "2524e88c-971b-4ebe-825b-a601a5f72a38", 00:15:02.043 "assigned_rate_limits": { 00:15:02.043 "rw_ios_per_sec": 0, 00:15:02.043 "rw_mbytes_per_sec": 0, 00:15:02.043 "r_mbytes_per_sec": 0, 00:15:02.043 "w_mbytes_per_sec": 0 00:15:02.043 }, 00:15:02.043 "claimed": false, 00:15:02.043 "zoned": false, 00:15:02.043 "supported_io_types": { 00:15:02.043 "read": true, 00:15:02.043 "write": true, 00:15:02.043 "unmap": true, 00:15:02.043 "write_zeroes": true, 00:15:02.043 "flush": true, 00:15:02.043 "reset": true, 00:15:02.043 "compare": true, 00:15:02.043 "compare_and_write": true, 00:15:02.043 "abort": true, 00:15:02.043 "nvme_admin": true, 00:15:02.043 "nvme_io": true 00:15:02.043 }, 00:15:02.043 "driver_specific": { 00:15:02.043 "nvme": [ 00:15:02.043 { 00:15:02.043 "trid": { 00:15:02.043 "trtype": "TCP", 00:15:02.043 "adrfam": "IPv4", 00:15:02.043 "traddr": "10.0.0.2", 00:15:02.043 "trsvcid": "4420", 00:15:02.043 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:02.043 }, 00:15:02.043 "ctrlr_data": { 00:15:02.043 "cntlid": 1, 00:15:02.043 "vendor_id": "0x8086", 00:15:02.043 "model_number": "SPDK bdev Controller", 00:15:02.043 "serial_number": "SPDK0", 00:15:02.043 "firmware_revision": "24.01.1", 00:15:02.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:02.043 "oacs": { 00:15:02.043 "security": 0, 00:15:02.043 "format": 0, 00:15:02.043 "firmware": 0, 00:15:02.043 "ns_manage": 0 00:15:02.043 }, 00:15:02.043 "multi_ctrlr": true, 
00:15:02.043 "ana_reporting": false 00:15:02.043 }, 00:15:02.043 "vs": { 00:15:02.043 "nvme_version": "1.3" 00:15:02.043 }, 00:15:02.043 "ns_data": { 00:15:02.043 "id": 1, 00:15:02.043 "can_share": true 00:15:02.043 } 00:15:02.043 } 00:15:02.043 ], 00:15:02.043 "mp_policy": "active_passive" 00:15:02.043 } 00:15:02.043 } 00:15:02.043 ] 00:15:02.043 17:40:23 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=577583 00:15:02.043 17:40:23 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:02.043 17:40:23 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:02.302 Running I/O for 10 seconds... 00:15:03.241 Latency(us) 00:15:03.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.241 Nvme0n1 : 1.00 22800.00 89.06 0.00 0.00 0.00 0.00 0.00 00:15:03.241 =================================================================================================================== 00:15:03.241 Total : 22800.00 89.06 0.00 0.00 0.00 0.00 0.00 00:15:03.241 00:15:04.178 17:40:25 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 77ef852b-4361-47d6-800a-63b07311b64e 00:15:04.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.178 Nvme0n1 : 2.00 23178.50 90.54 0.00 0.00 0.00 0.00 0.00 00:15:04.178 =================================================================================================================== 00:15:04.178 Total : 23178.50 90.54 0.00 0.00 0.00 0.00 0.00 00:15:04.178 00:15:04.178 true 00:15:04.178 17:40:25 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77ef852b-4361-47d6-800a-63b07311b64e 00:15:04.178 17:40:25 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:04.437 17:40:25 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:04.437 17:40:25 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:04.437 17:40:25 -- target/nvmf_lvs_grow.sh@65 -- # wait 577583 00:15:05.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.376 Nvme0n1 : 3.00 23104.67 90.25 0.00 0.00 0.00 0.00 0.00 00:15:05.376 =================================================================================================================== 00:15:05.376 Total : 23104.67 90.25 0.00 0.00 0.00 0.00 0.00 00:15:05.376 00:15:06.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.354 Nvme0n1 : 4.00 23108.25 90.27 0.00 0.00 0.00 0.00 0.00 00:15:06.354 =================================================================================================================== 00:15:06.354 Total : 23108.25 90.27 0.00 0.00 0.00 0.00 0.00 00:15:06.354 00:15:07.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.292 Nvme0n1 : 5.00 23158.00 90.46 0.00 0.00 0.00 0.00 0.00 00:15:07.292 =================================================================================================================== 00:15:07.292 Total : 23158.00 90.46 0.00 0.00 0.00 0.00 0.00 00:15:07.292 00:15:08.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.230 Nvme0n1 : 6.00 23163.33 90.48 0.00 0.00 0.00 0.00 0.00 00:15:08.230 
=================================================================================================================== 00:15:08.230 Total : 23163.33 90.48 0.00 0.00 0.00 0.00 0.00 00:15:08.230 00:15:09.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.168 Nvme0n1 : 7.00 23141.86 90.40 0.00 0.00 0.00 0.00 0.00 00:15:09.168 =================================================================================================================== 00:15:09.168 Total : 23141.86 90.40 0.00 0.00 0.00 0.00 0.00 00:15:09.168 00:15:10.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.107 Nvme0n1 : 8.00 23213.12 90.68 0.00 0.00 0.00 0.00 0.00 00:15:10.107 =================================================================================================================== 00:15:10.107 Total : 23213.12 90.68 0.00 0.00 0.00 0.00 0.00 00:15:10.107 00:15:11.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.487 Nvme0n1 : 9.00 23291.00 90.98 0.00 0.00 0.00 0.00 0.00 00:15:11.487 =================================================================================================================== 00:15:11.487 Total : 23291.00 90.98 0.00 0.00 0.00 0.00 0.00 00:15:11.487 00:15:12.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.426 Nvme0n1 : 10.00 23292.30 90.99 0.00 0.00 0.00 0.00 0.00 00:15:12.426 =================================================================================================================== 00:15:12.426 Total : 23292.30 90.99 0.00 0.00 0.00 0.00 0.00 00:15:12.426 00:15:12.426 00:15:12.426 Latency(us) 00:15:12.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.426 Nvme0n1 : 10.00 23293.58 90.99 0.00 0.00 5491.57 2721.17 29861.62 00:15:12.426 =================================================================================================================== 00:15:12.426 Total : 23293.58 90.99 0.00 0.00 5491.57 2721.17 29861.62 00:15:12.426 0 00:15:12.426 17:40:33 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 577344 00:15:12.426 17:40:33 -- common/autotest_common.sh@926 -- # '[' -z 577344 ']' 00:15:12.426 17:40:33 -- common/autotest_common.sh@930 -- # kill -0 577344 00:15:12.426 17:40:33 -- common/autotest_common.sh@931 -- # uname 00:15:12.426 17:40:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:12.426 17:40:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 577344 00:15:12.426 17:40:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:12.426 17:40:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:12.426 17:40:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 577344' 00:15:12.426 killing process with pid 577344 00:15:12.426 17:40:33 -- common/autotest_common.sh@945 -- # kill 577344 00:15:12.426 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.426 00:15:12.426 Latency(us) 00:15:12.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.426 =================================================================================================================== 00:15:12.426 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:12.426 17:40:33 -- common/autotest_common.sh@950 -- # wait 577344 00:15:12.426 17:40:33 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:12.686 17:40:34 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77ef852b-4361-47d6-800a-63b07311b64e 00:15:12.686 17:40:34 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:12.945 17:40:34 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:12.945 17:40:34 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:12.945 17:40:34 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:12.945 [2024-07-24 17:40:34.463584] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:12.945 17:40:34 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77ef852b-4361-47d6-800a-63b07311b64e 00:15:12.945 17:40:34 -- common/autotest_common.sh@640 -- # local es=0 00:15:12.945 17:40:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77ef852b-4361-47d6-800a-63b07311b64e 00:15:12.945 17:40:34 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.945 17:40:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:12.945 17:40:34 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.945 17:40:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:12.945 17:40:34 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.945 17:40:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:12.945 17:40:34 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.945 17:40:34 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:12.945 17:40:34 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77ef852b-4361-47d6-800a-63b07311b64e 00:15:13.205 request: 00:15:13.205 { 00:15:13.205 "uuid": "77ef852b-4361-47d6-800a-63b07311b64e", 00:15:13.205 "method": "bdev_lvol_get_lvstores", 00:15:13.205 "req_id": 1 00:15:13.205 } 00:15:13.205 Got JSON-RPC error response 00:15:13.205 response: 00:15:13.205 { 00:15:13.205 "code": -19, 00:15:13.205 "message": "No such device" 00:15:13.205 } 00:15:13.205 17:40:34 -- common/autotest_common.sh@643 -- # es=1 00:15:13.205 17:40:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:13.205 17:40:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:13.205 17:40:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:13.205 17:40:34 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:13.465 aio_bdev 00:15:13.465 17:40:34 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 2524e88c-971b-4ebe-825b-a601a5f72a38 00:15:13.465 17:40:34 -- common/autotest_common.sh@887 -- # local bdev_name=2524e88c-971b-4ebe-825b-a601a5f72a38 00:15:13.465 17:40:34 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:13.465 17:40:34 -- common/autotest_common.sh@889 -- # local i 00:15:13.465 17:40:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:13.465 17:40:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:13.465 17:40:34 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:13.465 17:40:34 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2524e88c-971b-4ebe-825b-a601a5f72a38 -t 2000 00:15:13.724 [ 00:15:13.724 { 00:15:13.724 "name": "2524e88c-971b-4ebe-825b-a601a5f72a38", 00:15:13.724 "aliases": [ 00:15:13.724 "lvs/lvol" 00:15:13.724 ], 00:15:13.724 "product_name": "Logical Volume", 00:15:13.724 "block_size": 4096, 00:15:13.724 "num_blocks": 38912, 00:15:13.724 "uuid": "2524e88c-971b-4ebe-825b-a601a5f72a38", 00:15:13.724 "assigned_rate_limits": { 00:15:13.724 "rw_ios_per_sec": 0, 00:15:13.724 "rw_mbytes_per_sec": 0, 00:15:13.724 "r_mbytes_per_sec": 0, 00:15:13.724 "w_mbytes_per_sec": 0 00:15:13.724 }, 00:15:13.724 "claimed": false, 00:15:13.724 "zoned": false, 00:15:13.724 "supported_io_types": { 00:15:13.724 "read": true, 00:15:13.724 "write": true, 00:15:13.724 "unmap": true, 00:15:13.724 "write_zeroes": true, 00:15:13.724 "flush": false, 00:15:13.724 "reset": true, 00:15:13.724 "compare": false, 00:15:13.724 "compare_and_write": false, 00:15:13.724 "abort": false, 00:15:13.724 "nvme_admin": false, 00:15:13.724 "nvme_io": false 00:15:13.724 }, 00:15:13.724 "driver_specific": { 00:15:13.724 "lvol": { 00:15:13.724 "lvol_store_uuid": "77ef852b-4361-47d6-800a-63b07311b64e", 00:15:13.724 "base_bdev": "aio_bdev", 00:15:13.724 "thin_provision": false, 00:15:13.724 "snapshot": false, 00:15:13.724 "clone": false, 00:15:13.724 "esnap_clone": false 00:15:13.724 } 00:15:13.724 } 00:15:13.724 } 00:15:13.724 ] 00:15:13.724 17:40:35 -- common/autotest_common.sh@895 -- # return 0 00:15:13.724 17:40:35 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77ef852b-4361-47d6-800a-63b07311b64e 00:15:13.724 17:40:35 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:13.724 17:40:35 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:13.724 17:40:35 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77ef852b-4361-47d6-800a-63b07311b64e 00:15:13.724 17:40:35 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:13.984 17:40:35 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:13.984 17:40:35 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2524e88c-971b-4ebe-825b-a601a5f72a38 00:15:14.243 17:40:35 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77ef852b-4361-47d6-800a-63b07311b64e 00:15:14.504 17:40:35 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:14.504 17:40:36 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:14.504 00:15:14.504 real 0m15.465s 00:15:14.504 user 0m15.143s 00:15:14.504 sys 0m1.399s 00:15:14.504 17:40:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:14.504 17:40:36 -- common/autotest_common.sh@10 -- # set +x 00:15:14.504 ************************************ 00:15:14.504 END TEST lvs_grow_clean 00:15:14.504 ************************************ 00:15:14.504 17:40:36 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:14.504 17:40:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:14.504 17:40:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:14.504 17:40:36 -- common/autotest_common.sh@10 -- # set +x 00:15:14.504 ************************************ 00:15:14.504 START TEST lvs_grow_dirty 00:15:14.504 ************************************ 00:15:14.504 17:40:36 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:15:14.504 17:40:36 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:14.504 17:40:36 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:14.504 17:40:36 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:14.504 17:40:36 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:14.504 17:40:36 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:14.504 17:40:36 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:14.504 17:40:36 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:14.504 17:40:36 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:14.763 17:40:36 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:14.763 17:40:36 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:14.763 17:40:36 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:15.022 17:40:36 -- target/nvmf_lvs_grow.sh@28 -- # lvs=bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:15.022 17:40:36 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:15.022 17:40:36 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:15.281 17:40:36 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:15.281 17:40:36 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:15.281 17:40:36 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 lvol 150 00:15:15.281 17:40:36 -- target/nvmf_lvs_grow.sh@33 -- # lvol=3a3586e7-dabd-4b7e-91e8-9cc58c67f1db 00:15:15.281 17:40:36 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:15.281 17:40:36 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:15.541 [2024-07-24 17:40:36.944756] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:15.541 [2024-07-24 17:40:36.944803] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:15.541 
true 00:15:15.541 17:40:36 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:15.541 17:40:36 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:15.541 17:40:37 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:15.541 17:40:37 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:15.801 17:40:37 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3a3586e7-dabd-4b7e-91e8-9cc58c67f1db 00:15:16.060 17:40:37 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:16.060 17:40:37 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:16.321 17:40:37 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=579975 00:15:16.321 17:40:37 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:16.321 17:40:37 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:16.321 17:40:37 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 579975 /var/tmp/bdevperf.sock 00:15:16.321 17:40:37 -- common/autotest_common.sh@819 -- # '[' -z 579975 ']' 00:15:16.321 17:40:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.321 17:40:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:16.321 17:40:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.321 17:40:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:16.321 17:40:37 -- common/autotest_common.sh@10 -- # set +x 00:15:16.321 [2024-07-24 17:40:37.809549] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:16.321 [2024-07-24 17:40:37.809600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid579975 ] 00:15:16.321 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.321 [2024-07-24 17:40:37.861374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.581 [2024-07-24 17:40:37.932624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.149 17:40:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:17.149 17:40:38 -- common/autotest_common.sh@852 -- # return 0 00:15:17.149 17:40:38 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:17.408 Nvme0n1 00:15:17.409 17:40:38 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:17.668 [ 00:15:17.668 { 00:15:17.668 "name": "Nvme0n1", 00:15:17.668 "aliases": [ 00:15:17.668 "3a3586e7-dabd-4b7e-91e8-9cc58c67f1db" 00:15:17.668 ], 00:15:17.668 "product_name": "NVMe disk", 00:15:17.668 "block_size": 4096, 00:15:17.668 "num_blocks": 38912, 00:15:17.668 "uuid": "3a3586e7-dabd-4b7e-91e8-9cc58c67f1db", 00:15:17.668 "assigned_rate_limits": { 00:15:17.668 "rw_ios_per_sec": 0, 00:15:17.668 "rw_mbytes_per_sec": 0, 00:15:17.668 "r_mbytes_per_sec": 0, 00:15:17.668 "w_mbytes_per_sec": 0 00:15:17.668 }, 00:15:17.668 "claimed": false, 00:15:17.668 "zoned": false, 00:15:17.668 "supported_io_types": { 00:15:17.668 "read": true, 00:15:17.668 "write": true, 00:15:17.668 "unmap": true, 00:15:17.668 "write_zeroes": true, 00:15:17.668 "flush": true, 00:15:17.668 "reset": true, 00:15:17.668 "compare": true, 00:15:17.668 "compare_and_write": true, 00:15:17.668 "abort": true, 00:15:17.668 "nvme_admin": true, 00:15:17.668 "nvme_io": true 00:15:17.668 }, 00:15:17.668 "driver_specific": { 00:15:17.668 "nvme": [ 00:15:17.668 { 00:15:17.668 "trid": { 00:15:17.668 "trtype": "TCP", 00:15:17.668 "adrfam": "IPv4", 00:15:17.668 "traddr": "10.0.0.2", 00:15:17.668 "trsvcid": "4420", 00:15:17.668 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:17.668 }, 00:15:17.668 "ctrlr_data": { 00:15:17.668 "cntlid": 1, 00:15:17.668 "vendor_id": "0x8086", 00:15:17.668 "model_number": "SPDK bdev Controller", 00:15:17.668 "serial_number": "SPDK0", 00:15:17.668 "firmware_revision": "24.01.1", 00:15:17.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:17.668 "oacs": { 00:15:17.668 "security": 0, 00:15:17.668 "format": 0, 00:15:17.668 "firmware": 0, 00:15:17.668 "ns_manage": 0 00:15:17.668 }, 00:15:17.668 "multi_ctrlr": true, 00:15:17.668 "ana_reporting": false 00:15:17.668 }, 00:15:17.668 "vs": { 00:15:17.668 "nvme_version": "1.3" 00:15:17.668 }, 00:15:17.668 "ns_data": { 00:15:17.668 "id": 1, 00:15:17.668 "can_share": true 00:15:17.668 } 00:15:17.668 } 00:15:17.668 ], 00:15:17.668 "mp_policy": "active_passive" 00:15:17.668 } 00:15:17.668 } 00:15:17.668 ] 00:15:17.668 17:40:39 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:17.668 17:40:39 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=580220 00:15:17.668 17:40:39 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:17.668 Running I/O 
for 10 seconds... 00:15:19.047 Latency(us) 00:15:19.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.047 Nvme0n1 : 1.00 22627.00 88.39 0.00 0.00 0.00 0.00 0.00 00:15:19.047 =================================================================================================================== 00:15:19.047 Total : 22627.00 88.39 0.00 0.00 0.00 0.00 0.00 00:15:19.047 00:15:19.616 17:40:41 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:19.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.875 Nvme0n1 : 2.00 22803.50 89.08 0.00 0.00 0.00 0.00 0.00 00:15:19.876 =================================================================================================================== 00:15:19.876 Total : 22803.50 89.08 0.00 0.00 0.00 0.00 0.00 00:15:19.876 00:15:19.876 true 00:15:19.876 17:40:41 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:19.876 17:40:41 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:20.135 17:40:41 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:20.135 17:40:41 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:20.135 17:40:41 -- target/nvmf_lvs_grow.sh@65 -- # wait 580220 00:15:20.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.704 Nvme0n1 : 3.00 22954.67 89.67 0.00 0.00 0.00 0.00 0.00 00:15:20.704 =================================================================================================================== 00:15:20.704 Total : 22954.67 89.67 0.00 0.00 0.00 0.00 0.00 00:15:20.704 00:15:21.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.640 Nvme0n1 : 4.00 22908.75 89.49 0.00 0.00 0.00 0.00 0.00 00:15:21.640 =================================================================================================================== 00:15:21.640 Total : 22908.75 89.49 0.00 0.00 0.00 0.00 0.00 00:15:21.640 00:15:23.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.029 Nvme0n1 : 5.00 23111.40 90.28 0.00 0.00 0.00 0.00 0.00 00:15:23.029 =================================================================================================================== 00:15:23.029 Total : 23111.40 90.28 0.00 0.00 0.00 0.00 0.00 00:15:23.029 00:15:23.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.968 Nvme0n1 : 6.00 23149.17 90.43 0.00 0.00 0.00 0.00 0.00 00:15:23.968 =================================================================================================================== 00:15:23.968 Total : 23149.17 90.43 0.00 0.00 0.00 0.00 0.00 00:15:23.968 00:15:24.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.944 Nvme0n1 : 7.00 23150.71 90.43 0.00 0.00 0.00 0.00 0.00 00:15:24.944 =================================================================================================================== 00:15:24.944 Total : 23150.71 90.43 0.00 0.00 0.00 0.00 0.00 00:15:24.944 00:15:25.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.882 Nvme0n1 : 8.00 23168.88 90.50 0.00 0.00 0.00 0.00 0.00 00:15:25.882 
=================================================================================================================== 00:15:25.882 Total : 23168.88 90.50 0.00 0.00 0.00 0.00 0.00 00:15:25.882 00:15:26.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.821 Nvme0n1 : 9.00 23174.22 90.52 0.00 0.00 0.00 0.00 0.00 00:15:26.821 =================================================================================================================== 00:15:26.821 Total : 23174.22 90.52 0.00 0.00 0.00 0.00 0.00 00:15:26.821 00:15:27.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.759 Nvme0n1 : 10.00 23239.50 90.78 0.00 0.00 0.00 0.00 0.00 00:15:27.759 =================================================================================================================== 00:15:27.759 Total : 23239.50 90.78 0.00 0.00 0.00 0.00 0.00 00:15:27.759 00:15:27.759 00:15:27.759 Latency(us) 00:15:27.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.759 Nvme0n1 : 10.01 23239.90 90.78 0.00 0.00 5504.41 2778.16 22453.20 00:15:27.759 =================================================================================================================== 00:15:27.760 Total : 23239.90 90.78 0.00 0.00 5504.41 2778.16 22453.20 00:15:27.760 0 00:15:27.760 17:40:49 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 579975 00:15:27.760 17:40:49 -- common/autotest_common.sh@926 -- # '[' -z 579975 ']' 00:15:27.760 17:40:49 -- common/autotest_common.sh@930 -- # kill -0 579975 00:15:27.760 17:40:49 -- common/autotest_common.sh@931 -- # uname 00:15:27.760 17:40:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:27.760 17:40:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 579975 00:15:27.760 17:40:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:27.760 17:40:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:27.760 17:40:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 579975' 00:15:27.760 killing process with pid 579975 00:15:27.760 17:40:49 -- common/autotest_common.sh@945 -- # kill 579975 00:15:27.760 Received shutdown signal, test time was about 10.000000 seconds 00:15:27.760 00:15:27.760 Latency(us) 00:15:27.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.760 =================================================================================================================== 00:15:27.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.760 17:40:49 -- common/autotest_common.sh@950 -- # wait 579975 00:15:28.020 17:40:49 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:28.279 17:40:49 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:28.279 17:40:49 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:28.279 17:40:49 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:28.279 17:40:49 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:28.279 17:40:49 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 576839 00:15:28.279 17:40:49 -- target/nvmf_lvs_grow.sh@74 -- # wait 576839 00:15:28.538 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 576839 Killed "${NVMF_APP[@]}" "$@" 00:15:28.538 17:40:49 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:28.538 17:40:49 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:28.538 17:40:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:28.538 17:40:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:28.538 17:40:49 -- common/autotest_common.sh@10 -- # set +x 00:15:28.538 17:40:49 -- nvmf/common.sh@469 -- # nvmfpid=582083 00:15:28.538 17:40:49 -- nvmf/common.sh@470 -- # waitforlisten 582083 00:15:28.538 17:40:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:28.538 17:40:49 -- common/autotest_common.sh@819 -- # '[' -z 582083 ']' 00:15:28.538 17:40:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.538 17:40:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.538 17:40:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.538 17:40:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.538 17:40:49 -- common/autotest_common.sh@10 -- # set +x 00:15:28.538 [2024-07-24 17:40:49.954740] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:28.539 [2024-07-24 17:40:49.954791] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.539 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.539 [2024-07-24 17:40:50.012721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.539 [2024-07-24 17:40:50.103617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.539 [2024-07-24 17:40:50.103724] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.539 [2024-07-24 17:40:50.103732] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.539 [2024-07-24 17:40:50.103738] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:28.539 [2024-07-24 17:40:50.103758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.478 17:40:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:29.478 17:40:50 -- common/autotest_common.sh@852 -- # return 0 00:15:29.478 17:40:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:29.478 17:40:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:29.478 17:40:50 -- common/autotest_common.sh@10 -- # set +x 00:15:29.478 17:40:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.478 17:40:50 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:29.478 [2024-07-24 17:40:50.945936] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:29.478 [2024-07-24 17:40:50.946037] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:29.478 [2024-07-24 17:40:50.946071] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:29.478 17:40:50 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:29.478 17:40:50 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 3a3586e7-dabd-4b7e-91e8-9cc58c67f1db 00:15:29.478 17:40:50 -- common/autotest_common.sh@887 -- # local bdev_name=3a3586e7-dabd-4b7e-91e8-9cc58c67f1db 00:15:29.478 17:40:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:29.478 17:40:50 -- common/autotest_common.sh@889 -- # local i 00:15:29.478 17:40:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:29.478 17:40:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:29.478 17:40:50 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:29.737 17:40:51 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3a3586e7-dabd-4b7e-91e8-9cc58c67f1db -t 2000 00:15:29.737 [ 00:15:29.737 { 00:15:29.737 "name": "3a3586e7-dabd-4b7e-91e8-9cc58c67f1db", 00:15:29.737 "aliases": [ 00:15:29.737 "lvs/lvol" 00:15:29.737 ], 00:15:29.737 "product_name": "Logical Volume", 00:15:29.737 "block_size": 4096, 00:15:29.737 "num_blocks": 38912, 00:15:29.737 "uuid": "3a3586e7-dabd-4b7e-91e8-9cc58c67f1db", 00:15:29.737 "assigned_rate_limits": { 00:15:29.738 "rw_ios_per_sec": 0, 00:15:29.738 "rw_mbytes_per_sec": 0, 00:15:29.738 "r_mbytes_per_sec": 0, 00:15:29.738 "w_mbytes_per_sec": 0 00:15:29.738 }, 00:15:29.738 "claimed": false, 00:15:29.738 "zoned": false, 00:15:29.738 "supported_io_types": { 00:15:29.738 "read": true, 00:15:29.738 "write": true, 00:15:29.738 "unmap": true, 00:15:29.738 "write_zeroes": true, 00:15:29.738 "flush": false, 00:15:29.738 "reset": true, 00:15:29.738 "compare": false, 00:15:29.738 "compare_and_write": false, 00:15:29.738 "abort": false, 00:15:29.738 "nvme_admin": false, 00:15:29.738 "nvme_io": false 00:15:29.738 }, 00:15:29.738 "driver_specific": { 00:15:29.738 "lvol": { 00:15:29.738 "lvol_store_uuid": "bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8", 00:15:29.738 "base_bdev": "aio_bdev", 00:15:29.738 "thin_provision": false, 00:15:29.738 "snapshot": false, 00:15:29.738 "clone": false, 00:15:29.738 "esnap_clone": false 00:15:29.738 } 00:15:29.738 } 00:15:29.738 } 00:15:29.738 ] 00:15:29.738 17:40:51 -- common/autotest_common.sh@895 -- # return 0 00:15:29.738 17:40:51 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:29.738 17:40:51 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:29.997 17:40:51 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:29.997 17:40:51 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:29.997 17:40:51 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:30.257 17:40:51 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:30.257 17:40:51 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:30.257 [2024-07-24 17:40:51.810460] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:30.257 17:40:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:30.257 17:40:51 -- common/autotest_common.sh@640 -- # local es=0 00:15:30.257 17:40:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:30.257 17:40:51 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:30.257 17:40:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:30.257 17:40:51 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:30.257 17:40:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:30.257 17:40:51 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:30.257 17:40:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:30.257 17:40:51 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:30.257 17:40:51 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:30.257 17:40:51 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:30.517 request: 00:15:30.517 { 00:15:30.517 "uuid": "bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8", 00:15:30.517 "method": "bdev_lvol_get_lvstores", 00:15:30.517 "req_id": 1 00:15:30.517 } 00:15:30.517 Got JSON-RPC error response 00:15:30.517 response: 00:15:30.517 { 00:15:30.517 "code": -19, 00:15:30.517 "message": "No such device" 00:15:30.517 } 00:15:30.517 17:40:51 -- common/autotest_common.sh@643 -- # es=1 00:15:30.517 17:40:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:30.517 17:40:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:30.517 17:40:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:30.517 17:40:51 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:30.776 aio_bdev 00:15:30.776 17:40:52 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 3a3586e7-dabd-4b7e-91e8-9cc58c67f1db 00:15:30.776 17:40:52 -- 
common/autotest_common.sh@887 -- # local bdev_name=3a3586e7-dabd-4b7e-91e8-9cc58c67f1db 00:15:30.776 17:40:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:30.776 17:40:52 -- common/autotest_common.sh@889 -- # local i 00:15:30.776 17:40:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:30.776 17:40:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:30.776 17:40:52 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:30.776 17:40:52 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3a3586e7-dabd-4b7e-91e8-9cc58c67f1db -t 2000 00:15:31.036 [ 00:15:31.036 { 00:15:31.036 "name": "3a3586e7-dabd-4b7e-91e8-9cc58c67f1db", 00:15:31.036 "aliases": [ 00:15:31.036 "lvs/lvol" 00:15:31.036 ], 00:15:31.036 "product_name": "Logical Volume", 00:15:31.036 "block_size": 4096, 00:15:31.036 "num_blocks": 38912, 00:15:31.036 "uuid": "3a3586e7-dabd-4b7e-91e8-9cc58c67f1db", 00:15:31.036 "assigned_rate_limits": { 00:15:31.036 "rw_ios_per_sec": 0, 00:15:31.036 "rw_mbytes_per_sec": 0, 00:15:31.036 "r_mbytes_per_sec": 0, 00:15:31.036 "w_mbytes_per_sec": 0 00:15:31.036 }, 00:15:31.036 "claimed": false, 00:15:31.036 "zoned": false, 00:15:31.036 "supported_io_types": { 00:15:31.036 "read": true, 00:15:31.036 "write": true, 00:15:31.036 "unmap": true, 00:15:31.036 "write_zeroes": true, 00:15:31.036 "flush": false, 00:15:31.036 "reset": true, 00:15:31.036 "compare": false, 00:15:31.036 "compare_and_write": false, 00:15:31.036 "abort": false, 00:15:31.036 "nvme_admin": false, 00:15:31.036 "nvme_io": false 00:15:31.036 }, 00:15:31.036 "driver_specific": { 00:15:31.036 "lvol": { 00:15:31.036 "lvol_store_uuid": "bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8", 00:15:31.036 "base_bdev": "aio_bdev", 00:15:31.036 "thin_provision": false, 00:15:31.036 "snapshot": false, 00:15:31.036 "clone": false, 00:15:31.036 "esnap_clone": false 00:15:31.036 } 00:15:31.036 } 00:15:31.036 } 00:15:31.036 ] 00:15:31.036 17:40:52 -- common/autotest_common.sh@895 -- # return 0 00:15:31.036 17:40:52 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:31.036 17:40:52 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:31.297 17:40:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:31.297 17:40:52 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:31.297 17:40:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:31.297 17:40:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:31.297 17:40:52 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3a3586e7-dabd-4b7e-91e8-9cc58c67f1db 00:15:31.557 17:40:53 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc5d5bac-7ff4-496d-9ad5-69abc99fa2d8 00:15:31.817 17:40:53 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:31.817 17:40:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:31.817 00:15:31.817 real 0m17.294s 00:15:31.817 user 
0m44.079s 00:15:31.817 sys 0m3.846s 00:15:31.817 17:40:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.817 17:40:53 -- common/autotest_common.sh@10 -- # set +x 00:15:31.817 ************************************ 00:15:31.817 END TEST lvs_grow_dirty 00:15:31.817 ************************************ 00:15:32.077 17:40:53 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:32.077 17:40:53 -- common/autotest_common.sh@796 -- # type=--id 00:15:32.077 17:40:53 -- common/autotest_common.sh@797 -- # id=0 00:15:32.077 17:40:53 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:15:32.077 17:40:53 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:32.077 17:40:53 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:15:32.077 17:40:53 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:15:32.077 17:40:53 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:15:32.077 17:40:53 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:32.077 nvmf_trace.0 00:15:32.077 17:40:53 -- common/autotest_common.sh@811 -- # return 0 00:15:32.077 17:40:53 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:32.077 17:40:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:32.077 17:40:53 -- nvmf/common.sh@116 -- # sync 00:15:32.077 17:40:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:32.077 17:40:53 -- nvmf/common.sh@119 -- # set +e 00:15:32.077 17:40:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:32.077 17:40:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:32.077 rmmod nvme_tcp 00:15:32.077 rmmod nvme_fabrics 00:15:32.077 rmmod nvme_keyring 00:15:32.077 17:40:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:32.077 17:40:53 -- nvmf/common.sh@123 -- # set -e 00:15:32.077 17:40:53 -- nvmf/common.sh@124 -- # return 0 00:15:32.077 17:40:53 -- nvmf/common.sh@477 -- # '[' -n 582083 ']' 00:15:32.077 17:40:53 -- nvmf/common.sh@478 -- # killprocess 582083 00:15:32.077 17:40:53 -- common/autotest_common.sh@926 -- # '[' -z 582083 ']' 00:15:32.077 17:40:53 -- common/autotest_common.sh@930 -- # kill -0 582083 00:15:32.077 17:40:53 -- common/autotest_common.sh@931 -- # uname 00:15:32.077 17:40:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:32.077 17:40:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 582083 00:15:32.077 17:40:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:32.077 17:40:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:32.077 17:40:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 582083' 00:15:32.077 killing process with pid 582083 00:15:32.077 17:40:53 -- common/autotest_common.sh@945 -- # kill 582083 00:15:32.077 17:40:53 -- common/autotest_common.sh@950 -- # wait 582083 00:15:32.336 17:40:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:32.336 17:40:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:32.336 17:40:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:32.336 17:40:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.336 17:40:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:32.336 17:40:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.336 17:40:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.336 17:40:53 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:34.247 17:40:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:34.247 00:15:34.247 real 0m41.709s 00:15:34.247 user 1m4.967s 00:15:34.247 sys 0m9.535s 00:15:34.247 17:40:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.247 17:40:55 -- common/autotest_common.sh@10 -- # set +x 00:15:34.247 ************************************ 00:15:34.247 END TEST nvmf_lvs_grow 00:15:34.247 ************************************ 00:15:34.508 17:40:55 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:34.508 17:40:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:34.508 17:40:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:34.508 17:40:55 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 ************************************ 00:15:34.508 START TEST nvmf_bdev_io_wait 00:15:34.508 ************************************ 00:15:34.508 17:40:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:34.508 * Looking for test storage... 00:15:34.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.508 17:40:55 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.508 17:40:55 -- nvmf/common.sh@7 -- # uname -s 00:15:34.508 17:40:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.508 17:40:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.508 17:40:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.508 17:40:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.508 17:40:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.508 17:40:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.508 17:40:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.508 17:40:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.508 17:40:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.508 17:40:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.508 17:40:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.508 17:40:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.508 17:40:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.508 17:40:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.508 17:40:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.508 17:40:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.508 17:40:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.508 17:40:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.508 17:40:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.508 17:40:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.508 17:40:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.508 17:40:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.508 17:40:55 -- paths/export.sh@5 -- # export PATH 00:15:34.508 17:40:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.508 17:40:55 -- nvmf/common.sh@46 -- # : 0 00:15:34.508 17:40:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:34.508 17:40:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:34.508 17:40:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:34.508 17:40:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.508 17:40:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.508 17:40:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:34.508 17:40:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:34.508 17:40:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:34.508 17:40:55 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.508 17:40:55 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.508 17:40:55 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:34.508 17:40:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:34.508 17:40:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.508 17:40:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:34.508 17:40:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:34.508 17:40:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:34.508 17:40:55 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.508 17:40:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.508 17:40:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.508 17:40:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:34.508 17:40:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:34.508 17:40:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:34.508 17:40:55 -- common/autotest_common.sh@10 -- # set +x 00:15:39.789 17:41:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:39.789 17:41:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:39.789 17:41:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:39.789 17:41:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:39.789 17:41:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:39.789 17:41:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:39.789 17:41:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:39.789 17:41:00 -- nvmf/common.sh@294 -- # net_devs=() 00:15:39.789 17:41:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:39.789 17:41:00 -- nvmf/common.sh@295 -- # e810=() 00:15:39.789 17:41:00 -- nvmf/common.sh@295 -- # local -ga e810 00:15:39.789 17:41:00 -- nvmf/common.sh@296 -- # x722=() 00:15:39.789 17:41:00 -- nvmf/common.sh@296 -- # local -ga x722 00:15:39.789 17:41:00 -- nvmf/common.sh@297 -- # mlx=() 00:15:39.789 17:41:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:39.789 17:41:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.789 17:41:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:39.789 17:41:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:39.789 17:41:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:39.789 17:41:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:39.789 17:41:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:39.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:39.789 17:41:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:15:39.789 17:41:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:39.789 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:39.789 17:41:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:39.789 17:41:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:39.789 17:41:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.789 17:41:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:39.789 17:41:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.789 17:41:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:39.789 Found net devices under 0000:86:00.0: cvl_0_0 00:15:39.789 17:41:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.789 17:41:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:39.789 17:41:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.789 17:41:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:39.789 17:41:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.789 17:41:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:39.789 Found net devices under 0000:86:00.1: cvl_0_1 00:15:39.789 17:41:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.789 17:41:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:39.789 17:41:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:39.789 17:41:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:39.789 17:41:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:39.789 17:41:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.789 17:41:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.789 17:41:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.789 17:41:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:39.789 17:41:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.789 17:41:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.789 17:41:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:39.789 17:41:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.789 17:41:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.789 17:41:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:39.789 17:41:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:39.789 17:41:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.789 17:41:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.789 17:41:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.789 17:41:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.789 17:41:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:39.789 17:41:01 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.789 17:41:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.789 17:41:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.789 17:41:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:39.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:15:39.789 00:15:39.789 --- 10.0.0.2 ping statistics --- 00:15:39.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.789 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:15:39.789 17:41:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:15:39.789 00:15:39.789 --- 10.0.0.1 ping statistics --- 00:15:39.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.789 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:15:39.789 17:41:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.789 17:41:01 -- nvmf/common.sh@410 -- # return 0 00:15:39.789 17:41:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:39.789 17:41:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.789 17:41:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:39.789 17:41:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:39.789 17:41:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.789 17:41:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:39.789 17:41:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:39.789 17:41:01 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:39.790 17:41:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:39.790 17:41:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:39.790 17:41:01 -- common/autotest_common.sh@10 -- # set +x 00:15:39.790 17:41:01 -- nvmf/common.sh@469 -- # nvmfpid=586156 00:15:39.790 17:41:01 -- nvmf/common.sh@470 -- # waitforlisten 586156 00:15:39.790 17:41:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:39.790 17:41:01 -- common/autotest_common.sh@819 -- # '[' -z 586156 ']' 00:15:39.790 17:41:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.790 17:41:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:39.790 17:41:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.790 17:41:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:39.790 17:41:01 -- common/autotest_common.sh@10 -- # set +x 00:15:39.790 [2024-07-24 17:41:01.216287] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:39.790 [2024-07-24 17:41:01.216327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.790 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.790 [2024-07-24 17:41:01.273239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.790 [2024-07-24 17:41:01.352748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:39.790 [2024-07-24 17:41:01.352874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.790 [2024-07-24 17:41:01.352882] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.790 [2024-07-24 17:41:01.352889] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.790 [2024-07-24 17:41:01.352923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.790 [2024-07-24 17:41:01.353040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.790 [2024-07-24 17:41:01.353128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.790 [2024-07-24 17:41:01.353129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.729 17:41:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:40.729 17:41:02 -- common/autotest_common.sh@852 -- # return 0 00:15:40.729 17:41:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:40.729 17:41:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:40.729 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.729 17:41:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:40.729 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.729 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.729 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:40.729 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.729 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.729 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.729 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.729 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.729 [2024-07-24 17:41:02.135763] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.729 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.729 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.729 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.729 Malloc0 00:15:40.729 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:40.729 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.729 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.729 17:41:02 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:40.729 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.729 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.729 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.729 17:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.729 17:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.729 [2024-07-24 17:41:02.197372] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.729 17:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=586409 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@30 -- # READ_PID=586412 00:15:40.729 17:41:02 -- nvmf/common.sh@520 -- # config=() 00:15:40.729 17:41:02 -- nvmf/common.sh@520 -- # local subsystem config 00:15:40.729 17:41:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:40.729 17:41:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:40.729 { 00:15:40.729 "params": { 00:15:40.729 "name": "Nvme$subsystem", 00:15:40.729 "trtype": "$TEST_TRANSPORT", 00:15:40.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.729 "adrfam": "ipv4", 00:15:40.729 "trsvcid": "$NVMF_PORT", 00:15:40.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.729 "hdgst": ${hdgst:-false}, 00:15:40.729 "ddgst": ${ddgst:-false} 00:15:40.729 }, 00:15:40.729 "method": "bdev_nvme_attach_controller" 00:15:40.729 } 00:15:40.729 EOF 00:15:40.729 )") 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=586415 00:15:40.729 17:41:02 -- nvmf/common.sh@520 -- # config=() 00:15:40.729 17:41:02 -- nvmf/common.sh@520 -- # local subsystem config 00:15:40.729 17:41:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:40.729 17:41:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:40.729 { 00:15:40.729 "params": { 00:15:40.729 "name": "Nvme$subsystem", 00:15:40.729 "trtype": "$TEST_TRANSPORT", 00:15:40.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.729 "adrfam": "ipv4", 00:15:40.729 "trsvcid": "$NVMF_PORT", 00:15:40.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.729 "hdgst": ${hdgst:-false}, 00:15:40.729 "ddgst": ${ddgst:-false} 00:15:40.729 }, 00:15:40.729 "method": "bdev_nvme_attach_controller" 00:15:40.729 } 00:15:40.729 EOF 00:15:40.729 )") 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:40.729 17:41:02 -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=586418 00:15:40.729 17:41:02 -- nvmf/common.sh@520 -- # config=() 00:15:40.729 17:41:02 -- nvmf/common.sh@542 -- # cat 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@35 -- # sync 00:15:40.729 17:41:02 -- nvmf/common.sh@520 -- # local subsystem config 00:15:40.729 17:41:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:40.729 17:41:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:40.729 { 00:15:40.729 "params": { 00:15:40.729 "name": "Nvme$subsystem", 00:15:40.729 "trtype": "$TEST_TRANSPORT", 00:15:40.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.729 "adrfam": "ipv4", 00:15:40.729 "trsvcid": "$NVMF_PORT", 00:15:40.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.729 "hdgst": ${hdgst:-false}, 00:15:40.729 "ddgst": ${ddgst:-false} 00:15:40.729 }, 00:15:40.729 "method": "bdev_nvme_attach_controller" 00:15:40.729 } 00:15:40.729 EOF 00:15:40.729 )") 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:40.729 17:41:02 -- nvmf/common.sh@520 -- # config=() 00:15:40.729 17:41:02 -- nvmf/common.sh@542 -- # cat 00:15:40.729 17:41:02 -- nvmf/common.sh@520 -- # local subsystem config 00:15:40.729 17:41:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:40.729 17:41:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:40.729 { 00:15:40.729 "params": { 00:15:40.729 "name": "Nvme$subsystem", 00:15:40.729 "trtype": "$TEST_TRANSPORT", 00:15:40.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.729 "adrfam": "ipv4", 00:15:40.729 "trsvcid": "$NVMF_PORT", 00:15:40.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.729 "hdgst": ${hdgst:-false}, 00:15:40.729 "ddgst": ${ddgst:-false} 00:15:40.729 }, 00:15:40.729 "method": "bdev_nvme_attach_controller" 00:15:40.729 } 00:15:40.729 EOF 00:15:40.729 )") 00:15:40.729 17:41:02 -- nvmf/common.sh@542 -- # cat 00:15:40.729 17:41:02 -- target/bdev_io_wait.sh@37 -- # wait 586409 00:15:40.729 17:41:02 -- nvmf/common.sh@542 -- # cat 00:15:40.729 17:41:02 -- nvmf/common.sh@544 -- # jq . 00:15:40.729 17:41:02 -- nvmf/common.sh@544 -- # jq . 00:15:40.729 17:41:02 -- nvmf/common.sh@544 -- # jq . 00:15:40.729 17:41:02 -- nvmf/common.sh@545 -- # IFS=, 00:15:40.729 17:41:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:40.729 "params": { 00:15:40.729 "name": "Nvme1", 00:15:40.729 "trtype": "tcp", 00:15:40.729 "traddr": "10.0.0.2", 00:15:40.729 "adrfam": "ipv4", 00:15:40.729 "trsvcid": "4420", 00:15:40.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.729 "hdgst": false, 00:15:40.729 "ddgst": false 00:15:40.729 }, 00:15:40.729 "method": "bdev_nvme_attach_controller" 00:15:40.729 }' 00:15:40.729 17:41:02 -- nvmf/common.sh@544 -- # jq . 
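As an aside on the four bdevperf launches traced above (write, read, flush, unmap): each instance receives its bdev configuration through --json /dev/fd/63, i.e. bash process substitution over the JSON that gen_nvmf_target_json assembles from the heredoc fragment shown in this trace. A minimal sketch of that pattern, reusing the parameter values printed above; the helper name make_target_json and the outer "subsystems"/"bdev" wrapper are assumptions here, since the log only shows the bdev_nvme_attach_controller fragment and the final jq/printf step.

make_target_json() {                 # illustrative stand-in for gen_nvmf_target_json
cat <<'JSON'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
JSON
}
# one bdevperf per workload; only the core mask (-m), shm id (-i) and -w value differ
./build/examples/bdevperf -m 0x10 -i 1 --json <(make_target_json) -q 128 -o 4096 -w write -t 1 -s 256

Feeding the config through a file descriptor rather than a temp file is what lets all four instances start in parallel from one shell without stepping on each other.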
00:15:40.729 17:41:02 -- nvmf/common.sh@545 -- # IFS=, 00:15:40.729 17:41:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:40.729 "params": { 00:15:40.729 "name": "Nvme1", 00:15:40.729 "trtype": "tcp", 00:15:40.729 "traddr": "10.0.0.2", 00:15:40.729 "adrfam": "ipv4", 00:15:40.729 "trsvcid": "4420", 00:15:40.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.729 "hdgst": false, 00:15:40.729 "ddgst": false 00:15:40.729 }, 00:15:40.729 "method": "bdev_nvme_attach_controller" 00:15:40.729 }' 00:15:40.729 17:41:02 -- nvmf/common.sh@545 -- # IFS=, 00:15:40.729 17:41:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:40.729 "params": { 00:15:40.729 "name": "Nvme1", 00:15:40.729 "trtype": "tcp", 00:15:40.729 "traddr": "10.0.0.2", 00:15:40.729 "adrfam": "ipv4", 00:15:40.729 "trsvcid": "4420", 00:15:40.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.730 "hdgst": false, 00:15:40.730 "ddgst": false 00:15:40.730 }, 00:15:40.730 "method": "bdev_nvme_attach_controller" 00:15:40.730 }' 00:15:40.730 17:41:02 -- nvmf/common.sh@545 -- # IFS=, 00:15:40.730 17:41:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:40.730 "params": { 00:15:40.730 "name": "Nvme1", 00:15:40.730 "trtype": "tcp", 00:15:40.730 "traddr": "10.0.0.2", 00:15:40.730 "adrfam": "ipv4", 00:15:40.730 "trsvcid": "4420", 00:15:40.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.730 "hdgst": false, 00:15:40.730 "ddgst": false 00:15:40.730 }, 00:15:40.730 "method": "bdev_nvme_attach_controller" 00:15:40.730 }' 00:15:40.730 [2024-07-24 17:41:02.244033] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:40.730 [2024-07-24 17:41:02.244090] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:40.730 [2024-07-24 17:41:02.244293] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:40.730 [2024-07-24 17:41:02.244333] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:40.730 [2024-07-24 17:41:02.245048] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:40.730 [2024-07-24 17:41:02.245089] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:40.730 [2024-07-24 17:41:02.246378] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:40.730 [2024-07-24 17:41:02.246419] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:40.730 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.988 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.988 [2024-07-24 17:41:02.428121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.988 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.988 [2024-07-24 17:41:02.501971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:40.988 [2024-07-24 17:41:02.525497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.988 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.247 [2024-07-24 17:41:02.601270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:41.247 [2024-07-24 17:41:02.619911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.247 [2024-07-24 17:41:02.680730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.247 [2024-07-24 17:41:02.705183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:41.247 [2024-07-24 17:41:02.753922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:41.247 Running I/O for 1 seconds... 00:15:41.247 Running I/O for 1 seconds... 00:15:41.247 Running I/O for 1 seconds... 00:15:41.505 Running I/O for 1 seconds... 00:15:42.502 00:15:42.503 Latency(us) 00:15:42.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.503 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:42.503 Nvme1n1 : 1.00 250278.68 977.65 0.00 0.00 509.69 204.80 669.61 00:15:42.503 =================================================================================================================== 00:15:42.503 Total : 250278.68 977.65 0.00 0.00 509.69 204.80 669.61 00:15:42.503 00:15:42.503 Latency(us) 00:15:42.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.503 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:42.503 Nvme1n1 : 1.01 13408.33 52.38 0.00 0.00 9514.43 2550.21 18805.98 00:15:42.503 =================================================================================================================== 00:15:42.503 Total : 13408.33 52.38 0.00 0.00 9514.43 2550.21 18805.98 00:15:42.503 00:15:42.503 Latency(us) 00:15:42.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.503 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:42.503 Nvme1n1 : 1.01 9788.88 38.24 0.00 0.00 13015.97 7351.43 37384.01 00:15:42.503 =================================================================================================================== 00:15:42.503 Total : 9788.88 38.24 0.00 0.00 13015.97 7351.43 37384.01 00:15:42.503 00:15:42.503 Latency(us) 00:15:42.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.503 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:42.503 Nvme1n1 : 1.01 10568.04 41.28 0.00 0.00 12077.52 4872.46 21427.42 00:15:42.503 =================================================================================================================== 00:15:42.503 Total : 10568.04 41.28 0.00 0.00 12077.52 4872.46 21427.42 00:15:42.503 17:41:04 -- target/bdev_io_wait.sh@38 -- # wait 586412 00:15:42.762 
17:41:04 -- target/bdev_io_wait.sh@39 -- # wait 586415 00:15:42.762 17:41:04 -- target/bdev_io_wait.sh@40 -- # wait 586418 00:15:42.762 17:41:04 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.762 17:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.762 17:41:04 -- common/autotest_common.sh@10 -- # set +x 00:15:42.762 17:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.762 17:41:04 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:42.762 17:41:04 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:42.762 17:41:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:42.762 17:41:04 -- nvmf/common.sh@116 -- # sync 00:15:42.762 17:41:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:42.762 17:41:04 -- nvmf/common.sh@119 -- # set +e 00:15:42.762 17:41:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:42.762 17:41:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:42.762 rmmod nvme_tcp 00:15:42.762 rmmod nvme_fabrics 00:15:42.762 rmmod nvme_keyring 00:15:42.762 17:41:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:42.762 17:41:04 -- nvmf/common.sh@123 -- # set -e 00:15:42.762 17:41:04 -- nvmf/common.sh@124 -- # return 0 00:15:42.762 17:41:04 -- nvmf/common.sh@477 -- # '[' -n 586156 ']' 00:15:42.762 17:41:04 -- nvmf/common.sh@478 -- # killprocess 586156 00:15:42.762 17:41:04 -- common/autotest_common.sh@926 -- # '[' -z 586156 ']' 00:15:42.762 17:41:04 -- common/autotest_common.sh@930 -- # kill -0 586156 00:15:42.762 17:41:04 -- common/autotest_common.sh@931 -- # uname 00:15:42.762 17:41:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:42.762 17:41:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 586156 00:15:42.762 17:41:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:42.762 17:41:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:42.762 17:41:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 586156' 00:15:42.762 killing process with pid 586156 00:15:42.762 17:41:04 -- common/autotest_common.sh@945 -- # kill 586156 00:15:42.762 17:41:04 -- common/autotest_common.sh@950 -- # wait 586156 00:15:43.022 17:41:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:43.022 17:41:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:43.022 17:41:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:43.022 17:41:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.022 17:41:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:43.022 17:41:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.022 17:41:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.022 17:41:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.561 17:41:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:45.561 00:15:45.561 real 0m10.694s 00:15:45.561 user 0m19.209s 00:15:45.561 sys 0m5.618s 00:15:45.561 17:41:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.561 17:41:06 -- common/autotest_common.sh@10 -- # set +x 00:15:45.561 ************************************ 00:15:45.561 END TEST nvmf_bdev_io_wait 00:15:45.561 ************************************ 00:15:45.561 17:41:06 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:45.561 17:41:06 -- common/autotest_common.sh@1077 -- # '[' 3 
-le 1 ']' 00:15:45.561 17:41:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:45.561 17:41:06 -- common/autotest_common.sh@10 -- # set +x 00:15:45.561 ************************************ 00:15:45.561 START TEST nvmf_queue_depth 00:15:45.561 ************************************ 00:15:45.561 17:41:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:45.561 * Looking for test storage... 00:15:45.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.561 17:41:06 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.561 17:41:06 -- nvmf/common.sh@7 -- # uname -s 00:15:45.561 17:41:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.561 17:41:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.561 17:41:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.561 17:41:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.561 17:41:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.561 17:41:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.561 17:41:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.561 17:41:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.561 17:41:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.561 17:41:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.561 17:41:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.561 17:41:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.561 17:41:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.561 17:41:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.561 17:41:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.562 17:41:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.562 17:41:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.562 17:41:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.562 17:41:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.562 17:41:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.562 17:41:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.562 17:41:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.562 17:41:06 -- paths/export.sh@5 -- # export PATH 00:15:45.562 17:41:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.562 17:41:06 -- nvmf/common.sh@46 -- # : 0 00:15:45.562 17:41:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:45.562 17:41:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:45.562 17:41:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:45.562 17:41:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.562 17:41:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.562 17:41:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:45.562 17:41:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:45.562 17:41:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:45.562 17:41:06 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:45.562 17:41:06 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:45.562 17:41:06 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:45.562 17:41:06 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:45.562 17:41:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:45.562 17:41:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.562 17:41:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:45.562 17:41:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:45.562 17:41:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:45.562 17:41:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.562 17:41:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.562 17:41:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.562 17:41:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:45.562 17:41:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:45.562 17:41:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:45.562 17:41:06 -- common/autotest_common.sh@10 -- # set +x 00:15:50.841 17:41:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:50.841 17:41:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:50.841 17:41:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:50.841 17:41:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:50.841 17:41:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:50.841 17:41:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:50.841 17:41:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:50.841 17:41:11 -- nvmf/common.sh@294 -- # net_devs=() 
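One detail from the common.sh variables above, before the device scan below: the initiator identity (NVME_HOSTNQN / NVME_HOSTID) is generated once with nvme gen-hostnqn and kept for the whole run. A small sketch of that pattern; deriving the host ID from the NQN's uuid suffix is an assumption here, the trace only shows that the two values match.

NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
NVME_HOSTID=${NVME_HOSTNQN##*:}             # reuse the uuid part as the host ID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# tests that use the kernel initiator pass "${NVME_HOST[@]}" to 'nvme connect' (NVME_CONNECT above)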
00:15:50.841 17:41:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:50.841 17:41:11 -- nvmf/common.sh@295 -- # e810=() 00:15:50.841 17:41:11 -- nvmf/common.sh@295 -- # local -ga e810 00:15:50.841 17:41:11 -- nvmf/common.sh@296 -- # x722=() 00:15:50.841 17:41:11 -- nvmf/common.sh@296 -- # local -ga x722 00:15:50.841 17:41:11 -- nvmf/common.sh@297 -- # mlx=() 00:15:50.841 17:41:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:50.841 17:41:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.841 17:41:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:50.841 17:41:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:50.841 17:41:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:50.841 17:41:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:50.841 17:41:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:50.841 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:50.841 17:41:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:50.841 17:41:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:50.841 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:50.841 17:41:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:50.841 17:41:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:50.841 17:41:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:50.841 17:41:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.841 17:41:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:50.841 17:41:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
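The gather_supported_nvmf_pci_devs trace here buckets PCI functions into per-family arrays (e810, x722, mlx) by vendor:device ID, then resolves each selected function to its kernel net device, which is what produces the "Found net devices under 0000:86:00.x" lines that follow. A rough sketch of both steps, reading sysfs directly instead of the script's internal pci_bus_cache (whose construction is not shown in this log):

shopt -s nullglob                                  # a function without a net/ directory yields an empty array
intel=0x8086
pci_devs=() net_devs=()
for dev in /sys/bus/pci/devices/*; do
  id="$(cat "$dev/vendor"):$(cat "$dev/device")"
  case "$id" in
    "$intel:0x1592"|"$intel:0x159b") pci_devs+=("${dev##*/}") ;;   # Intel E810, the NICs found here
    # x722 (0x37d2) and the Mellanox IDs listed above would be matched the same way
  esac
done
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)                 # e.g. .../0000:86:00.0/net/cvl_0_0
  (( ${#pci_net_devs[@]} )) || continue                            # skip functions with no netdev bound
  pci_net_devs=("${pci_net_devs[@]##*/}")                          # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done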
00:15:50.841 17:41:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:50.841 Found net devices under 0000:86:00.0: cvl_0_0 00:15:50.841 17:41:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.841 17:41:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:50.841 17:41:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.841 17:41:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:50.842 17:41:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.842 17:41:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:50.842 Found net devices under 0000:86:00.1: cvl_0_1 00:15:50.842 17:41:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.842 17:41:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:50.842 17:41:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:50.842 17:41:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:50.842 17:41:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:50.842 17:41:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:50.842 17:41:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.842 17:41:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.842 17:41:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.842 17:41:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:50.842 17:41:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.842 17:41:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.842 17:41:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:50.842 17:41:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.842 17:41:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.842 17:41:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:50.842 17:41:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:50.842 17:41:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.842 17:41:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.842 17:41:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.842 17:41:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.842 17:41:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:50.842 17:41:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.842 17:41:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.842 17:41:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.842 17:41:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:50.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:15:50.842 00:15:50.842 --- 10.0.0.2 ping statistics --- 00:15:50.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.842 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:15:50.842 17:41:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:15:50.842 00:15:50.842 --- 10.0.0.1 ping statistics --- 00:15:50.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.842 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:15:50.842 17:41:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.842 17:41:11 -- nvmf/common.sh@410 -- # return 0 00:15:50.842 17:41:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:50.842 17:41:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.842 17:41:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:50.842 17:41:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:50.842 17:41:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.842 17:41:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:50.842 17:41:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:50.842 17:41:11 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:50.842 17:41:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:50.842 17:41:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:50.842 17:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.842 17:41:11 -- nvmf/common.sh@469 -- # nvmfpid=590196 00:15:50.842 17:41:11 -- nvmf/common.sh@470 -- # waitforlisten 590196 00:15:50.842 17:41:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:50.842 17:41:11 -- common/autotest_common.sh@819 -- # '[' -z 590196 ']' 00:15:50.842 17:41:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.842 17:41:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:50.842 17:41:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.842 17:41:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:50.842 17:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.842 [2024-07-24 17:41:11.966181] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:50.842 [2024-07-24 17:41:11.966224] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.842 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.842 [2024-07-24 17:41:12.022734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.842 [2024-07-24 17:41:12.091848] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:50.842 [2024-07-24 17:41:12.091956] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.842 [2024-07-24 17:41:12.091963] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.842 [2024-07-24 17:41:12.091969] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
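Stepping back to the network setup traced a few lines above: as in the earlier bdev_io_wait run, nvmf_tcp_init splits the two E810 ports into a target side and an initiator side by moving one port into a private network namespace, and nvmf_tgt is then launched inside that namespace. The sequence below is reconstructed from the commands in this log (interface names and addresses copied verbatim); treat it as a sketch of the test plumbing, not a hardened setup script.

TARGET_IF=cvl_0_0  INITIATOR_IF=cvl_0_1  TARGET_NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"                           # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                           # initiator IP, host side
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target IP, namespace side
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP (port 4420) through the host firewall
ping -c 1 10.0.0.2                                                    # host -> namespace reachability check
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                         # namespace -> host
modprobe nvme-tcp                                                     # kernel transport for initiator-side tests
ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # the target app runs inside the namespace

Keeping the target in its own namespace is what lets a single host drive a real NIC-to-NIC TCP path: the initiator reaches 10.0.0.2 over cvl_0_1 while nvmf_tgt only ever sees cvl_0_0.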
00:15:50.842 [2024-07-24 17:41:12.091985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.413 17:41:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:51.413 17:41:12 -- common/autotest_common.sh@852 -- # return 0 00:15:51.413 17:41:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:51.413 17:41:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:51.413 17:41:12 -- common/autotest_common.sh@10 -- # set +x 00:15:51.413 17:41:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.413 17:41:12 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.413 17:41:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.413 17:41:12 -- common/autotest_common.sh@10 -- # set +x 00:15:51.413 [2024-07-24 17:41:12.798569] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.413 17:41:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.413 17:41:12 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:51.413 17:41:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.413 17:41:12 -- common/autotest_common.sh@10 -- # set +x 00:15:51.413 Malloc0 00:15:51.414 17:41:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.414 17:41:12 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:51.414 17:41:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.414 17:41:12 -- common/autotest_common.sh@10 -- # set +x 00:15:51.414 17:41:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.414 17:41:12 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.414 17:41:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.414 17:41:12 -- common/autotest_common.sh@10 -- # set +x 00:15:51.414 17:41:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.414 17:41:12 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.414 17:41:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.414 17:41:12 -- common/autotest_common.sh@10 -- # set +x 00:15:51.414 [2024-07-24 17:41:12.849758] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.414 17:41:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.414 17:41:12 -- target/queue_depth.sh@30 -- # bdevperf_pid=590264 00:15:51.414 17:41:12 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:51.414 17:41:12 -- target/queue_depth.sh@33 -- # waitforlisten 590264 /var/tmp/bdevperf.sock 00:15:51.414 17:41:12 -- common/autotest_common.sh@819 -- # '[' -z 590264 ']' 00:15:51.414 17:41:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:51.414 17:41:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:51.414 17:41:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:51.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
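With nvmf_tgt up, queue_depth.sh provisions the target entirely over JSON-RPC; the rpc_cmd calls above correspond to scripts/rpc.py invocations against the target's default UNIX socket (rpc_cmd is the test wrapper around rpc.py, so the exact socket handling is assumed here, but the commands and arguments are taken from the trace):

RPC=./scripts/rpc.py                                           # talks to nvmf_tgt on /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192                   # TCP transport with the test's options
$RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose Malloc0 as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420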
00:15:51.414 17:41:12 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:51.414 17:41:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:51.414 17:41:12 -- common/autotest_common.sh@10 -- # set +x 00:15:51.414 [2024-07-24 17:41:12.894230] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:51.414 [2024-07-24 17:41:12.894273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590264 ] 00:15:51.414 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.414 [2024-07-24 17:41:12.947784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.674 [2024-07-24 17:41:13.027740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.241 17:41:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:52.241 17:41:13 -- common/autotest_common.sh@852 -- # return 0 00:15:52.241 17:41:13 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.241 17:41:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:52.241 17:41:13 -- common/autotest_common.sh@10 -- # set +x 00:15:52.241 NVMe0n1 00:15:52.241 17:41:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:52.241 17:41:13 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:52.499 Running I/O for 10 seconds... 
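The initiator half of the test then drives bdevperf entirely over its own RPC socket: bdevperf starts idle with -z, the NVMe-oF controller is attached through that socket (creating bdev NVMe0n1), and bdevperf.py perform_tests kicks off the 10-second verify run whose results follow below. Roughly, with the paths as printed elsewhere in this log:

SOCK=/var/tmp/bdevperf.sock
./build/examples/bdevperf -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &     # wait-for-RPC mode, queue depth 1024
BDEVPERF_PID=$!                                                               # (the real script waits for $SOCK to appear before issuing RPCs)
./scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1          # yields bdev NVMe0n1
./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests                 # run the 10 s verify workload
kill "$BDEVPERF_PID"; wait "$BDEVPERF_PID"                                    # the test then shuts bdevperf down (killprocess in the log)

The -q 1024 is the point of the test: it asks for a queue depth much larger than a typical NVMe-oF queue, presumably to exercise queueing and completion of I/O beyond what a single transport queue accepts.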
00:16:02.482 00:16:02.482 Latency(us) 00:16:02.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.482 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:02.482 Verification LBA range: start 0x0 length 0x4000 00:16:02.482 NVMe0n1 : 10.05 17970.27 70.20 0.00 0.00 56814.47 12252.38 59267.34 00:16:02.482 =================================================================================================================== 00:16:02.482 Total : 17970.27 70.20 0.00 0.00 56814.47 12252.38 59267.34 00:16:02.482 0 00:16:02.482 17:41:23 -- target/queue_depth.sh@39 -- # killprocess 590264 00:16:02.482 17:41:23 -- common/autotest_common.sh@926 -- # '[' -z 590264 ']' 00:16:02.482 17:41:23 -- common/autotest_common.sh@930 -- # kill -0 590264 00:16:02.482 17:41:23 -- common/autotest_common.sh@931 -- # uname 00:16:02.482 17:41:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:02.482 17:41:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 590264 00:16:02.482 17:41:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:02.482 17:41:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:02.482 17:41:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 590264' 00:16:02.482 killing process with pid 590264 00:16:02.482 17:41:24 -- common/autotest_common.sh@945 -- # kill 590264 00:16:02.482 Received shutdown signal, test time was about 10.000000 seconds 00:16:02.482 00:16:02.482 Latency(us) 00:16:02.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.482 =================================================================================================================== 00:16:02.482 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.482 17:41:24 -- common/autotest_common.sh@950 -- # wait 590264 00:16:02.741 17:41:24 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:02.741 17:41:24 -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:02.741 17:41:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:02.741 17:41:24 -- nvmf/common.sh@116 -- # sync 00:16:02.741 17:41:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:02.741 17:41:24 -- nvmf/common.sh@119 -- # set +e 00:16:02.741 17:41:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:02.741 17:41:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:02.741 rmmod nvme_tcp 00:16:02.741 rmmod nvme_fabrics 00:16:02.741 rmmod nvme_keyring 00:16:02.741 17:41:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:02.741 17:41:24 -- nvmf/common.sh@123 -- # set -e 00:16:02.741 17:41:24 -- nvmf/common.sh@124 -- # return 0 00:16:02.741 17:41:24 -- nvmf/common.sh@477 -- # '[' -n 590196 ']' 00:16:02.741 17:41:24 -- nvmf/common.sh@478 -- # killprocess 590196 00:16:02.741 17:41:24 -- common/autotest_common.sh@926 -- # '[' -z 590196 ']' 00:16:02.741 17:41:24 -- common/autotest_common.sh@930 -- # kill -0 590196 00:16:02.741 17:41:24 -- common/autotest_common.sh@931 -- # uname 00:16:02.741 17:41:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:02.741 17:41:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 590196 00:16:03.000 17:41:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:03.000 17:41:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:03.000 17:41:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 590196' 00:16:03.000 killing process with pid 590196 00:16:03.000 17:41:24 -- 
common/autotest_common.sh@945 -- # kill 590196 00:16:03.000 17:41:24 -- common/autotest_common.sh@950 -- # wait 590196 00:16:03.000 17:41:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:03.000 17:41:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:03.000 17:41:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:03.000 17:41:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.000 17:41:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:03.000 17:41:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.000 17:41:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.000 17:41:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.539 17:41:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:05.539 00:16:05.539 real 0m20.030s 00:16:05.539 user 0m24.656s 00:16:05.539 sys 0m5.524s 00:16:05.539 17:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.539 17:41:26 -- common/autotest_common.sh@10 -- # set +x 00:16:05.539 ************************************ 00:16:05.539 END TEST nvmf_queue_depth 00:16:05.539 ************************************ 00:16:05.539 17:41:26 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:05.539 17:41:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:05.539 17:41:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:05.539 17:41:26 -- common/autotest_common.sh@10 -- # set +x 00:16:05.539 ************************************ 00:16:05.539 START TEST nvmf_multipath 00:16:05.539 ************************************ 00:16:05.540 17:41:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:05.540 * Looking for test storage... 
00:16:05.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.540 17:41:26 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.540 17:41:26 -- nvmf/common.sh@7 -- # uname -s 00:16:05.540 17:41:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.540 17:41:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.540 17:41:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.540 17:41:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.540 17:41:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.540 17:41:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.540 17:41:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.540 17:41:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.540 17:41:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.540 17:41:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.540 17:41:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.540 17:41:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.540 17:41:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.540 17:41:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.540 17:41:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.540 17:41:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.540 17:41:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.540 17:41:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.540 17:41:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.540 17:41:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.540 17:41:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.540 17:41:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.540 17:41:26 -- paths/export.sh@5 -- # export PATH 00:16:05.540 17:41:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.540 17:41:26 -- nvmf/common.sh@46 -- # : 0 00:16:05.540 17:41:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:05.540 17:41:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:05.540 17:41:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:05.540 17:41:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.540 17:41:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.540 17:41:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:05.540 17:41:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:05.540 17:41:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:05.540 17:41:26 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.540 17:41:26 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.540 17:41:26 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:05.540 17:41:26 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:05.540 17:41:26 -- target/multipath.sh@43 -- # nvmftestinit 00:16:05.540 17:41:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:05.540 17:41:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.540 17:41:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:05.540 17:41:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:05.540 17:41:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:05.540 17:41:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.540 17:41:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.540 17:41:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.540 17:41:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:05.540 17:41:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:05.540 17:41:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:05.540 17:41:26 -- common/autotest_common.sh@10 -- # set +x 00:16:10.822 17:41:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:10.822 17:41:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:10.822 17:41:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:10.822 17:41:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:10.822 17:41:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:10.822 17:41:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:10.822 17:41:31 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:16:10.822 17:41:31 -- nvmf/common.sh@294 -- # net_devs=() 00:16:10.822 17:41:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:10.822 17:41:31 -- nvmf/common.sh@295 -- # e810=() 00:16:10.822 17:41:31 -- nvmf/common.sh@295 -- # local -ga e810 00:16:10.822 17:41:31 -- nvmf/common.sh@296 -- # x722=() 00:16:10.822 17:41:31 -- nvmf/common.sh@296 -- # local -ga x722 00:16:10.822 17:41:31 -- nvmf/common.sh@297 -- # mlx=() 00:16:10.822 17:41:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:10.822 17:41:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.822 17:41:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:10.822 17:41:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:10.822 17:41:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:10.822 17:41:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:10.822 17:41:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:10.822 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:10.822 17:41:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:10.822 17:41:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:10.822 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:10.822 17:41:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:10.822 17:41:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:10.822 17:41:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:10.822 17:41:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.822 17:41:31 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:16:10.822 17:41:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.822 17:41:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:10.822 Found net devices under 0000:86:00.0: cvl_0_0 00:16:10.822 17:41:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.822 17:41:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:10.822 17:41:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.822 17:41:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:10.822 17:41:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.822 17:41:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:10.822 Found net devices under 0000:86:00.1: cvl_0_1 00:16:10.823 17:41:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.823 17:41:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:10.823 17:41:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:10.823 17:41:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:10.823 17:41:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:10.823 17:41:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:10.823 17:41:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.823 17:41:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.823 17:41:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.823 17:41:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:10.823 17:41:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.823 17:41:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.823 17:41:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:10.823 17:41:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.823 17:41:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.823 17:41:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:10.823 17:41:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:10.823 17:41:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.823 17:41:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.823 17:41:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.823 17:41:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.823 17:41:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:10.823 17:41:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.823 17:41:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.823 17:41:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.823 17:41:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:10.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:16:10.823 00:16:10.823 --- 10.0.0.2 ping statistics --- 00:16:10.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.823 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:16:10.823 17:41:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:10.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:16:10.823 00:16:10.823 --- 10.0.0.1 ping statistics --- 00:16:10.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.823 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:16:10.823 17:41:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.823 17:41:32 -- nvmf/common.sh@410 -- # return 0 00:16:10.823 17:41:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:10.823 17:41:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.823 17:41:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:10.823 17:41:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:10.823 17:41:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.823 17:41:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:10.823 17:41:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:10.823 17:41:32 -- target/multipath.sh@45 -- # '[' -z ']' 00:16:10.823 17:41:32 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:10.823 only one NIC for nvmf test 00:16:10.823 17:41:32 -- target/multipath.sh@47 -- # nvmftestfini 00:16:10.823 17:41:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:10.823 17:41:32 -- nvmf/common.sh@116 -- # sync 00:16:10.823 17:41:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:10.823 17:41:32 -- nvmf/common.sh@119 -- # set +e 00:16:10.823 17:41:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:10.823 17:41:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:10.823 rmmod nvme_tcp 00:16:10.823 rmmod nvme_fabrics 00:16:10.823 rmmod nvme_keyring 00:16:10.823 17:41:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:10.823 17:41:32 -- nvmf/common.sh@123 -- # set -e 00:16:10.823 17:41:32 -- nvmf/common.sh@124 -- # return 0 00:16:10.823 17:41:32 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:16:10.823 17:41:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:10.823 17:41:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:10.823 17:41:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:10.823 17:41:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.823 17:41:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:10.823 17:41:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.823 17:41:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.823 17:41:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.769 17:41:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:12.769 17:41:34 -- target/multipath.sh@48 -- # exit 0 00:16:12.769 17:41:34 -- target/multipath.sh@1 -- # nvmftestfini 00:16:12.769 17:41:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:12.769 17:41:34 -- nvmf/common.sh@116 -- # sync 00:16:12.769 17:41:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:12.769 17:41:34 -- nvmf/common.sh@119 -- # set +e 00:16:12.769 17:41:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:12.769 17:41:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:12.769 17:41:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:12.769 17:41:34 -- nvmf/common.sh@123 -- # set -e 00:16:12.769 17:41:34 -- nvmf/common.sh@124 -- # return 0 00:16:12.769 17:41:34 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:16:12.769 17:41:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:12.769 17:41:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:12.769 17:41:34 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:16:12.769 17:41:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.769 17:41:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:12.770 17:41:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.770 17:41:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.770 17:41:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.770 17:41:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:12.770 00:16:12.770 real 0m7.599s 00:16:12.770 user 0m1.417s 00:16:12.770 sys 0m4.176s 00:16:12.770 17:41:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.770 17:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:12.770 ************************************ 00:16:12.770 END TEST nvmf_multipath 00:16:12.770 ************************************ 00:16:12.770 17:41:34 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:12.770 17:41:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:12.770 17:41:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.770 17:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:12.770 ************************************ 00:16:12.770 START TEST nvmf_zcopy 00:16:12.770 ************************************ 00:16:12.770 17:41:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:13.029 * Looking for test storage... 00:16:13.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.029 17:41:34 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.029 17:41:34 -- nvmf/common.sh@7 -- # uname -s 00:16:13.029 17:41:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.029 17:41:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.029 17:41:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.029 17:41:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.029 17:41:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.029 17:41:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.029 17:41:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.029 17:41:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.029 17:41:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.029 17:41:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.029 17:41:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.029 17:41:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.029 17:41:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.029 17:41:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.029 17:41:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.029 17:41:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.029 17:41:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.029 17:41:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.029 17:41:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.030 17:41:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.030 17:41:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.030 17:41:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.030 17:41:34 -- paths/export.sh@5 -- # export PATH 00:16:13.030 17:41:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.030 17:41:34 -- nvmf/common.sh@46 -- # : 0 00:16:13.030 17:41:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:13.030 17:41:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:13.030 17:41:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:13.030 17:41:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.030 17:41:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.030 17:41:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:13.030 17:41:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:13.030 17:41:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:13.030 17:41:34 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:13.030 17:41:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:13.030 17:41:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.030 17:41:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:13.030 17:41:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:13.030 17:41:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:13.030 17:41:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.030 17:41:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.030 17:41:34 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.030 17:41:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:13.030 17:41:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:13.030 17:41:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:13.030 17:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.306 17:41:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:18.306 17:41:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:18.307 17:41:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:18.307 17:41:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:18.307 17:41:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:18.307 17:41:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:18.307 17:41:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:18.307 17:41:39 -- nvmf/common.sh@294 -- # net_devs=() 00:16:18.307 17:41:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:18.307 17:41:39 -- nvmf/common.sh@295 -- # e810=() 00:16:18.307 17:41:39 -- nvmf/common.sh@295 -- # local -ga e810 00:16:18.307 17:41:39 -- nvmf/common.sh@296 -- # x722=() 00:16:18.307 17:41:39 -- nvmf/common.sh@296 -- # local -ga x722 00:16:18.307 17:41:39 -- nvmf/common.sh@297 -- # mlx=() 00:16:18.307 17:41:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:18.307 17:41:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.307 17:41:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:18.307 17:41:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:18.307 17:41:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:18.307 17:41:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:18.307 17:41:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:18.307 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:18.307 17:41:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:18.307 17:41:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:18.307 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:18.307 
17:41:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:18.307 17:41:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:18.307 17:41:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.307 17:41:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:18.307 17:41:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.307 17:41:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:18.307 Found net devices under 0000:86:00.0: cvl_0_0 00:16:18.307 17:41:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.307 17:41:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:18.307 17:41:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.307 17:41:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:18.307 17:41:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.307 17:41:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:18.307 Found net devices under 0000:86:00.1: cvl_0_1 00:16:18.307 17:41:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.307 17:41:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:18.307 17:41:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:18.307 17:41:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:18.307 17:41:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.307 17:41:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.307 17:41:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.307 17:41:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:18.307 17:41:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.307 17:41:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.307 17:41:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:18.307 17:41:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.307 17:41:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.307 17:41:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:18.307 17:41:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:18.307 17:41:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.307 17:41:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.307 17:41:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.307 17:41:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.307 17:41:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:18.307 17:41:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.307 17:41:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.307 17:41:39 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.307 17:41:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:18.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:16:18.307 00:16:18.307 --- 10.0.0.2 ping statistics --- 00:16:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.307 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:16:18.307 17:41:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:16:18.307 00:16:18.307 --- 10.0.0.1 ping statistics --- 00:16:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.307 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:16:18.307 17:41:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.307 17:41:39 -- nvmf/common.sh@410 -- # return 0 00:16:18.307 17:41:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:18.307 17:41:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.307 17:41:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:18.307 17:41:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.307 17:41:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:18.307 17:41:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:18.307 17:41:39 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:18.307 17:41:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:18.307 17:41:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:18.307 17:41:39 -- common/autotest_common.sh@10 -- # set +x 00:16:18.307 17:41:39 -- nvmf/common.sh@469 -- # nvmfpid=598972 00:16:18.307 17:41:39 -- nvmf/common.sh@470 -- # waitforlisten 598972 00:16:18.307 17:41:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:18.307 17:41:39 -- common/autotest_common.sh@819 -- # '[' -z 598972 ']' 00:16:18.307 17:41:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.307 17:41:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:18.307 17:41:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.307 17:41:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:18.307 17:41:39 -- common/autotest_common.sh@10 -- # set +x 00:16:18.307 [2024-07-24 17:41:39.788349] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
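For reference, the nvmf_tcp_init steps traced above boil down to a small piece of manual network plumbing: one of the two cvl_0_* ports stays in the default namespace as the initiator side (cvl_0_1, 10.0.0.1) while the other is moved into a private namespace where the target runs (cvl_0_0, 10.0.0.2). A minimal sketch of the same sequence, assuming the interface names from this run, root privileges, and paths shortened to the repository root:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on the initiator port, as in the trace
ping -c 1 10.0.0.2                                             # default namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Running the target inside the namespace is what lets a single two-port NIC act as both initiator and target on one host, which is the topology the two ping checks above verify.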
00:16:18.307 [2024-07-24 17:41:39.788392] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.307 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.307 [2024-07-24 17:41:39.846511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.566 [2024-07-24 17:41:39.924106] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:18.566 [2024-07-24 17:41:39.924212] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.566 [2024-07-24 17:41:39.924223] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.566 [2024-07-24 17:41:39.924230] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.566 [2024-07-24 17:41:39.924244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.134 17:41:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:19.134 17:41:40 -- common/autotest_common.sh@852 -- # return 0 00:16:19.134 17:41:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:19.134 17:41:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:19.134 17:41:40 -- common/autotest_common.sh@10 -- # set +x 00:16:19.134 17:41:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.134 17:41:40 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:19.134 17:41:40 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:19.134 17:41:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.134 17:41:40 -- common/autotest_common.sh@10 -- # set +x 00:16:19.134 [2024-07-24 17:41:40.611073] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.134 17:41:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.134 17:41:40 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:19.134 17:41:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.134 17:41:40 -- common/autotest_common.sh@10 -- # set +x 00:16:19.134 17:41:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.135 17:41:40 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.135 17:41:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.135 17:41:40 -- common/autotest_common.sh@10 -- # set +x 00:16:19.135 [2024-07-24 17:41:40.631212] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.135 17:41:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.135 17:41:40 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:19.135 17:41:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.135 17:41:40 -- common/autotest_common.sh@10 -- # set +x 00:16:19.135 17:41:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.135 17:41:40 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:19.135 17:41:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.135 17:41:40 -- common/autotest_common.sh@10 -- # set +x 00:16:19.135 malloc0 00:16:19.135 17:41:40 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:16:19.135 17:41:40 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:19.135 17:41:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.135 17:41:40 -- common/autotest_common.sh@10 -- # set +x 00:16:19.135 17:41:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.135 17:41:40 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:19.135 17:41:40 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:19.135 17:41:40 -- nvmf/common.sh@520 -- # config=() 00:16:19.135 17:41:40 -- nvmf/common.sh@520 -- # local subsystem config 00:16:19.135 17:41:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:19.135 17:41:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:19.135 { 00:16:19.135 "params": { 00:16:19.135 "name": "Nvme$subsystem", 00:16:19.135 "trtype": "$TEST_TRANSPORT", 00:16:19.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:19.135 "adrfam": "ipv4", 00:16:19.135 "trsvcid": "$NVMF_PORT", 00:16:19.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:19.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:19.135 "hdgst": ${hdgst:-false}, 00:16:19.135 "ddgst": ${ddgst:-false} 00:16:19.135 }, 00:16:19.135 "method": "bdev_nvme_attach_controller" 00:16:19.135 } 00:16:19.135 EOF 00:16:19.135 )") 00:16:19.135 17:41:40 -- nvmf/common.sh@542 -- # cat 00:16:19.135 17:41:40 -- nvmf/common.sh@544 -- # jq . 00:16:19.135 17:41:40 -- nvmf/common.sh@545 -- # IFS=, 00:16:19.135 17:41:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:19.135 "params": { 00:16:19.135 "name": "Nvme1", 00:16:19.135 "trtype": "tcp", 00:16:19.135 "traddr": "10.0.0.2", 00:16:19.135 "adrfam": "ipv4", 00:16:19.135 "trsvcid": "4420", 00:16:19.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.135 "hdgst": false, 00:16:19.135 "ddgst": false 00:16:19.135 }, 00:16:19.135 "method": "bdev_nvme_attach_controller" 00:16:19.135 }' 00:16:19.135 [2024-07-24 17:41:40.707773] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:19.135 [2024-07-24 17:41:40.707819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid599216 ] 00:16:19.135 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.393 [2024-07-24 17:41:40.761362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.393 [2024-07-24 17:41:40.832234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.652 Running I/O for 10 seconds... 
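The rpc_cmd calls traced from zcopy.sh correspond to scripts/rpc.py methods of the same name, so the target-side configuration can be reproduced by hand. A minimal sketch, assuming the repository root as working directory and the default /var/tmp/spdk.sock RPC socket:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                          # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                                 # 32 MB malloc bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1         # expose it as namespace 1

With that in place, the bdevperf run launched next in the trace only needs host-side attach parameters, which is what gen_nvmf_target_json supplies.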
00:16:29.637 00:16:29.637 Latency(us) 00:16:29.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.637 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:29.637 Verification LBA range: start 0x0 length 0x1000 00:16:29.637 Nvme1n1 : 10.01 12954.28 101.21 0.00 0.00 9857.65 997.29 37611.97 00:16:29.637 =================================================================================================================== 00:16:29.637 Total : 12954.28 101.21 0.00 0.00 9857.65 997.29 37611.97 00:16:29.897 17:41:51 -- target/zcopy.sh@39 -- # perfpid=601022 00:16:29.897 17:41:51 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:29.897 17:41:51 -- common/autotest_common.sh@10 -- # set +x 00:16:29.897 17:41:51 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:29.897 17:41:51 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:29.897 17:41:51 -- nvmf/common.sh@520 -- # config=() 00:16:29.897 17:41:51 -- nvmf/common.sh@520 -- # local subsystem config 00:16:29.897 17:41:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:29.897 17:41:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:29.897 { 00:16:29.897 "params": { 00:16:29.897 "name": "Nvme$subsystem", 00:16:29.897 "trtype": "$TEST_TRANSPORT", 00:16:29.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.897 "adrfam": "ipv4", 00:16:29.897 "trsvcid": "$NVMF_PORT", 00:16:29.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.897 "hdgst": ${hdgst:-false}, 00:16:29.897 "ddgst": ${ddgst:-false} 00:16:29.897 }, 00:16:29.897 "method": "bdev_nvme_attach_controller" 00:16:29.897 } 00:16:29.897 EOF 00:16:29.897 )") 00:16:29.897 17:41:51 -- nvmf/common.sh@542 -- # cat 00:16:29.897 [2024-07-24 17:41:51.278214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.897 [2024-07-24 17:41:51.278251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.897 17:41:51 -- nvmf/common.sh@544 -- # jq . 
00:16:29.897 17:41:51 -- nvmf/common.sh@545 -- # IFS=, 00:16:29.897 17:41:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:29.897 "params": { 00:16:29.897 "name": "Nvme1", 00:16:29.897 "trtype": "tcp", 00:16:29.897 "traddr": "10.0.0.2", 00:16:29.897 "adrfam": "ipv4", 00:16:29.897 "trsvcid": "4420", 00:16:29.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.897 "hdgst": false, 00:16:29.897 "ddgst": false 00:16:29.897 }, 00:16:29.897 "method": "bdev_nvme_attach_controller" 00:16:29.897 }' 00:16:29.897 [2024-07-24 17:41:51.290207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.897 [2024-07-24 17:41:51.290219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.897 [2024-07-24 17:41:51.298224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.897 [2024-07-24 17:41:51.298235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.897 [2024-07-24 17:41:51.306246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.897 [2024-07-24 17:41:51.306255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.897 [2024-07-24 17:41:51.313468] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:29.897 [2024-07-24 17:41:51.313513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601022 ] 00:16:29.897 [2024-07-24 17:41:51.314268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.897 [2024-07-24 17:41:51.314278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.897 [2024-07-24 17:41:51.322290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.897 [2024-07-24 17:41:51.322299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.897 [2024-07-24 17:41:51.334322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.897 [2024-07-24 17:41:51.334332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.897 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.897 [2024-07-24 17:41:51.342342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.342352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.350367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.350376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.358389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.358399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.365885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.898 [2024-07-24 17:41:51.366410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.366419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
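gen_nvmf_target_json, whose trace appears just above, emits the bdev_nvme_attach_controller parameters that bdevperf reads from /dev/fd/63. For a standalone reproduction the same JSON can be written to a file instead; the wrapper layout below is assumed to be the standard SPDK subsystems/config structure, and /tmp/nvme1.json is only an example name:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 5-second 8 KiB 50/50 random read/write at queue depth 128, matching the second run traced here
./build/examples/bdevperf --json /tmp/nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192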
00:16:29.898 [2024-07-24 17:41:51.378445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.378457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.386461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.386470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.394483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.394493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.402508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.402520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.410532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.410550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.422561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.422570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.430580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.430590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.438601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.438610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.439170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.898 [2024-07-24 17:41:51.446621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.446631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.454654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.454673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.466683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.466695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.478712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.478722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.486730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.486740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.898 [2024-07-24 17:41:51.494778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.898 [2024-07-24 17:41:51.494802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.502781] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.502791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.514806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.514814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.522840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.522857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.530855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.530867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.538878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.538890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.546900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.546911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.558929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.558938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.566950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.566960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.574972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.574981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.582991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.583000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.591017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.591029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.603054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.603067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.611070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.611078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.619090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.619098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.627107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.627116] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.635130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.635142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.647167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.647180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.655186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.655195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.663209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.663218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.671229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.671238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.679251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.679260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.691288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.691300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.699307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.699315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.707329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.707337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.715351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.715360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.723373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.723381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.735408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.735418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.743437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.743453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 [2024-07-24 17:41:51.751456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.159 [2024-07-24 17:41:51.751466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.159 Running I/O for 5 seconds... 
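The error pairs repeating through this part of the trace are self-describing: while the second bdevperf instance starts up and runs its 5-second random I/O workload, nvmf_subsystem_add_ns keeps being issued for NSID 1, and the target rejects each attempt because that namespace is still attached ("Requested NSID 1 already in use", then "Unable to add namespace" from the RPC layer). A hypothetical manual sequence that provokes the same pair of messages; it is not part of the traced script, and the rpc path is assumed as before:

rpc=./scripts/rpc.py
# namespace 1 is already attached, so a second add for the same NSID is rejected
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# => subsystem.c: Requested NSID 1 already in use / nvmf_rpc.c: Unable to add namespace
# detaching the namespace first lets the add succeed again
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1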
00:16:30.418 [2024-07-24 17:41:51.766234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.766253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.782396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.782414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.795926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.795946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.804907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.804926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.813506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.813524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.822282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.822305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.830451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.830468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.838780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.838797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.847464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.847482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.856561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.856579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.865272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.865290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.874223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.874240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.888249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.888267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.895284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.895302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.905386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 
[2024-07-24 17:41:51.905403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.912453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.912471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.922473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.922490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.936576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.936595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.944924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.944941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.953763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.953781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.962430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.962447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.971124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.971141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.979974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.979992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.988607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.988624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:51.997354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:51.997375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:52.006404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:52.006421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.418 [2024-07-24 17:41:52.015211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.418 [2024-07-24 17:41:52.015230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.676 [2024-07-24 17:41:52.023819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.023837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.032256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.032274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.041072] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.041089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.050306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.050323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.058739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.058756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.072362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.072380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.080873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.080890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.089844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.089861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.098983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.099001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.107293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.107311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.116107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.116125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.124591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.124608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.133391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.133409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.141905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.141922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.149369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.149386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.159143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.159161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.167569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.167593] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.176828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.176846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.185243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.185260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.194022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.194040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.203688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.203706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.212169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.212187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.221321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.221339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.230646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.230664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.239800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.239817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.248954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.248973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.257746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.257764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.677 [2024-07-24 17:41:52.266763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.677 [2024-07-24 17:41:52.266780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.276207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.276225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.285306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.285323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.300328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.300347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.308994] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.309011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.319316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.319334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.328295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.328313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.337405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.337422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.345994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.346011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.355316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.355334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.364354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.364371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.372695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.372713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.381395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.381412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.390399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.390418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.399328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.399347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.407839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.407858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.416121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.416139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.424366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.424385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.438484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.438502] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.447288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.447306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.455607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.455625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.464137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.464155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.472460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.472479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.935 [2024-07-24 17:41:52.486328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.935 [2024-07-24 17:41:52.486346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.936 [2024-07-24 17:41:52.495476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.936 [2024-07-24 17:41:52.495495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.936 [2024-07-24 17:41:52.504215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.936 [2024-07-24 17:41:52.504233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.936 [2024-07-24 17:41:52.513324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.936 [2024-07-24 17:41:52.513342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.936 [2024-07-24 17:41:52.522268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.936 [2024-07-24 17:41:52.522286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.536578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.536599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.544978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.544996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.553836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.553854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.562754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.562772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.571833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.571851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.581211] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.581229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.589471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.589489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.596948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.596966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.606239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.606258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.615717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.615736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.624399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.624417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.632063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.632081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.640845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.640864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.647954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.647973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.658582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.658601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.672430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.672449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.679155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.679174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.689118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.689136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.697830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.697849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.706481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.706499] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.714898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.714916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.723470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.723487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.732832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.732850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.739870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.739887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.751621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.751640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.762336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.762355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.769642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.769660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.780182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.780200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.194 [2024-07-24 17:41:52.788774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.194 [2024-07-24 17:41:52.788792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.796749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.796767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.813345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.813362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.821625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.821642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.830136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.830154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.839792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.839810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.846941] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.846959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.860340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.860358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.867579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.867600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.877170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.877187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.885880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.885898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.894720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.894739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.910591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.910609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.921290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.921308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.928993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.929010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.936374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.936390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.946711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.946729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.960407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.960425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.968709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.968727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.975857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.975874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.984733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.984750] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:52.993249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:52.993265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:53.008347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:53.008366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:53.018089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:53.018106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.453 [2024-07-24 17:41:53.026354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.453 [2024-07-24 17:41:53.026371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.454 [2024-07-24 17:41:53.035662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.454 [2024-07-24 17:41:53.035679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.454 [2024-07-24 17:41:53.044341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.454 [2024-07-24 17:41:53.044358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.053580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.053606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.062125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.062143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.073984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.074000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.083207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.083224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.091009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.091026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.105589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.105607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.113208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.113226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.119735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.119752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.130441] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.130459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.138840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.138857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.147542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.147559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.156624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.156641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.164055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.164073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.174319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.174338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.181710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.181729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.191932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.191951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.200836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.200853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.210142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.210159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.217768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.217785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.227325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.227346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.243603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.243621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.253563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.253615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.262269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.262287] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.271191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.271208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.278946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.278963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.288311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.288328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.296655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.296673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.714 [2024-07-24 17:41:53.307872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.714 [2024-07-24 17:41:53.307889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.972 [2024-07-24 17:41:53.318396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.972 [2024-07-24 17:41:53.318414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.325971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.325988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.340476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.340494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.347549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.347567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.357471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.357488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.366056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.366073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.374859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.374876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.383741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.383758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.392344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.392361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.401239] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.401256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.410132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.410154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.419108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.419126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.427977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.427994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.436413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.436430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.445231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.445248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.453702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.453719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.462928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.462945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.471459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.471476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.480479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.480497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.489102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.489120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.498388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.498405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.506856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.506874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.516072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.516090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.524636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.524654] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.533687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.533704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.541134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.541151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.551002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.551020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.973 [2024-07-24 17:41:53.563033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.973 [2024-07-24 17:41:53.563058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.572849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.572868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.582318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.582339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.591080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.591098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.600071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.600089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.608992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.609011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.617428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.617446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.627243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.627261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.635987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.636004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.644338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.644355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.653280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.653297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.662220] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.662237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.671757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.671774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.680710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.680728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.689554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.689571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.703850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.703868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.711203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.711221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.721601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.721619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.728819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.728837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.739295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.739313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.748242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.748261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.756941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.756958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.765596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.765614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.773982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.774000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.782864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.782883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.792078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.792112] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.800325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.800343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.809354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.809373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.818065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.818083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.231 [2024-07-24 17:41:53.826988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.231 [2024-07-24 17:41:53.827005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.835602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.835622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.844162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.844181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.852670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.852687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.861478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.861496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.870510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.870528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.879662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.879680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.888469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.888487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.897395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.897413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.905923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.905942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.914933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.914951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.923877] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.923896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.932349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.932367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.941166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.941184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.950317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.950335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.959239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.959257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.968385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.968403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.977160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.977178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.985712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.985731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:53.993310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:53.993329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:54.002913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:54.002932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:54.012445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:54.012463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:54.020917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:54.020935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:54.029989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:54.030007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.490 [2024-07-24 17:41:54.038940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.490 [2024-07-24 17:41:54.038957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.491 [2024-07-24 17:41:54.047319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.491 [2024-07-24 17:41:54.047337] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:32.491 [2024-07-24 17:41:54.056385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:32.491 [2024-07-24 17:41:54.056402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line pair (subsystem.c:1793 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1513 "Unable to add namespace") is printed for every further add-namespace attempt, one attempt roughly every 10 ms, from 17:41:54.065174 through 17:41:56.762820; the intervening repetitions differ only in their timestamps and are elided here ...]
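The two messages repeated above both come from the target's add-namespace path: spdk_nvmf_subsystem_add_ns_ext() refuses each request because NSID 1 is already allocated in the subsystem, and the RPC layer (nvmf_rpc.c) then logs the failed call. As a minimal sketch only (the subsystem NQN and bdev names below are placeholders, not values taken from this run, and this is not the script driving the loop above), the same pair of target-side errors can be provoked by asking twice for an NSID that is already taken:
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # first call claims NSID 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # rejected: "Requested NSID 1 already in use"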
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.122 [2024-07-24 17:41:56.676682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.122 [2024-07-24 17:41:56.685658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.122 [2024-07-24 17:41:56.685677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.122 [2024-07-24 17:41:56.694503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.122 [2024-07-24 17:41:56.694522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.122 [2024-07-24 17:41:56.703334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.122 [2024-07-24 17:41:56.703352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.122 [2024-07-24 17:41:56.718076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.122 [2024-07-24 17:41:56.718096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.728367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.728386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.736896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.736915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.745530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.745548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.754097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.754115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.762802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.762820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 00:16:35.382 Latency(us) 00:16:35.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.382 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:35.382 Nvme1n1 : 5.00 16462.23 128.61 0.00 0.00 7768.81 1787.99 28151.99 00:16:35.382 =================================================================================================================== 00:16:35.382 Total : 16462.23 128.61 0.00 0.00 7768.81 1787.99 28151.99 00:16:35.382 [2024-07-24 17:41:56.772432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.772450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.777138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.777151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.785155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.785168] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.793174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.793184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.805222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.805240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.813230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.813243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.821251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.821263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.829273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.829284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.837293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.837303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.849331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.849343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.857350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.857361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.865375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.865389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.873392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.873403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.881413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.881423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.893449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.893458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.901469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.901478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.909492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.909503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.917513] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.917524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.925536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.925551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.937567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.937576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.945584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.945593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.953607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.953617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.961630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.961643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.382 [2024-07-24 17:41:56.969652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.382 [2024-07-24 17:41:56.969662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.642 [2024-07-24 17:41:56.981689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.642 [2024-07-24 17:41:56.981701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (601022) - No such process 00:16:35.642 17:41:56 -- target/zcopy.sh@49 -- # wait 601022 00:16:35.642 17:41:56 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.642 17:41:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.642 17:41:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.642 17:41:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.642 17:41:56 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:35.642 17:41:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.642 17:41:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.642 delay0 00:16:35.642 17:41:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.642 17:41:57 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:35.642 17:41:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.642 17:41:57 -- common/autotest_common.sh@10 -- # set +x 00:16:35.642 17:41:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.642 17:41:57 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:35.642 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.642 [2024-07-24 17:41:57.116276] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery 
service or discovery service referral 00:16:42.211 Initializing NVMe Controllers 00:16:42.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:42.211 Initialization complete. Launching workers. 00:16:42.211 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 97 00:16:42.211 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 386, failed to submit 31 00:16:42.211 success 171, unsuccess 215, failed 0 00:16:42.211 17:42:03 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:42.211 17:42:03 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:42.211 17:42:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:42.211 17:42:03 -- nvmf/common.sh@116 -- # sync 00:16:42.211 17:42:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:42.211 17:42:03 -- nvmf/common.sh@119 -- # set +e 00:16:42.211 17:42:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:42.211 17:42:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:42.211 rmmod nvme_tcp 00:16:42.211 rmmod nvme_fabrics 00:16:42.211 rmmod nvme_keyring 00:16:42.211 17:42:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.211 17:42:03 -- nvmf/common.sh@123 -- # set -e 00:16:42.211 17:42:03 -- nvmf/common.sh@124 -- # return 0 00:16:42.211 17:42:03 -- nvmf/common.sh@477 -- # '[' -n 598972 ']' 00:16:42.211 17:42:03 -- nvmf/common.sh@478 -- # killprocess 598972 00:16:42.211 17:42:03 -- common/autotest_common.sh@926 -- # '[' -z 598972 ']' 00:16:42.211 17:42:03 -- common/autotest_common.sh@930 -- # kill -0 598972 00:16:42.211 17:42:03 -- common/autotest_common.sh@931 -- # uname 00:16:42.211 17:42:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:42.211 17:42:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 598972 00:16:42.211 17:42:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:42.211 17:42:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:42.211 17:42:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 598972' 00:16:42.211 killing process with pid 598972 00:16:42.211 17:42:03 -- common/autotest_common.sh@945 -- # kill 598972 00:16:42.211 17:42:03 -- common/autotest_common.sh@950 -- # wait 598972 00:16:42.211 17:42:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:42.211 17:42:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:42.211 17:42:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:42.211 17:42:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.211 17:42:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:42.211 17:42:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.211 17:42:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.211 17:42:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.122 17:42:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:44.122 00:16:44.122 real 0m31.362s 00:16:44.122 user 0m43.073s 00:16:44.122 sys 0m10.327s 00:16:44.122 17:42:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.122 17:42:05 -- common/autotest_common.sh@10 -- # set +x 00:16:44.122 ************************************ 00:16:44.122 END TEST nvmf_zcopy 00:16:44.122 ************************************ 00:16:44.419 17:42:05 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:44.419 17:42:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:44.419 17:42:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:44.419 17:42:05 -- common/autotest_common.sh@10 -- # set +x 00:16:44.419 ************************************ 00:16:44.419 START TEST nvmf_nmic 00:16:44.419 ************************************ 00:16:44.419 17:42:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:44.419 * Looking for test storage... 00:16:44.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.419 17:42:05 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.419 17:42:05 -- nvmf/common.sh@7 -- # uname -s 00:16:44.419 17:42:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.419 17:42:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.419 17:42:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.419 17:42:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.419 17:42:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.419 17:42:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.419 17:42:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.419 17:42:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.419 17:42:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.419 17:42:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.419 17:42:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.419 17:42:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.419 17:42:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.419 17:42:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.419 17:42:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.419 17:42:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.419 17:42:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.419 17:42:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.419 17:42:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.419 17:42:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.419 17:42:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.419 17:42:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.419 17:42:05 -- paths/export.sh@5 -- # export PATH 00:16:44.420 17:42:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.420 17:42:05 -- nvmf/common.sh@46 -- # : 0 00:16:44.420 17:42:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:44.420 17:42:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:44.420 17:42:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:44.420 17:42:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.420 17:42:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.420 17:42:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:44.420 17:42:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:44.420 17:42:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:44.420 17:42:05 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.420 17:42:05 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.420 17:42:05 -- target/nmic.sh@14 -- # nvmftestinit 00:16:44.420 17:42:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:44.420 17:42:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.420 17:42:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:44.420 17:42:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:44.420 17:42:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:44.420 17:42:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.420 17:42:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.420 17:42:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.420 17:42:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:44.420 17:42:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:44.420 17:42:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:44.420 17:42:05 -- common/autotest_common.sh@10 -- # set +x 00:16:49.696 17:42:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:16:49.696 17:42:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:49.696 17:42:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:49.696 17:42:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:49.696 17:42:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:49.696 17:42:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:49.696 17:42:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:49.696 17:42:10 -- nvmf/common.sh@294 -- # net_devs=() 00:16:49.696 17:42:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:49.696 17:42:10 -- nvmf/common.sh@295 -- # e810=() 00:16:49.696 17:42:10 -- nvmf/common.sh@295 -- # local -ga e810 00:16:49.696 17:42:10 -- nvmf/common.sh@296 -- # x722=() 00:16:49.696 17:42:10 -- nvmf/common.sh@296 -- # local -ga x722 00:16:49.696 17:42:10 -- nvmf/common.sh@297 -- # mlx=() 00:16:49.696 17:42:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:49.696 17:42:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.696 17:42:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:49.696 17:42:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:49.696 17:42:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:49.696 17:42:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:49.696 17:42:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:49.696 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:49.696 17:42:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:49.696 17:42:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:49.696 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:49.696 17:42:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
00:16:49.696 17:42:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:49.696 17:42:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.696 17:42:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:49.696 17:42:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.696 17:42:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:49.696 Found net devices under 0000:86:00.0: cvl_0_0 00:16:49.696 17:42:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.696 17:42:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:49.696 17:42:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.696 17:42:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:49.696 17:42:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.696 17:42:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:49.696 Found net devices under 0000:86:00.1: cvl_0_1 00:16:49.696 17:42:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.696 17:42:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:49.696 17:42:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:49.696 17:42:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:49.696 17:42:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.696 17:42:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.696 17:42:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.696 17:42:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:49.696 17:42:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.696 17:42:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.696 17:42:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:49.696 17:42:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.696 17:42:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.696 17:42:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:49.696 17:42:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:49.696 17:42:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.696 17:42:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.696 17:42:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.696 17:42:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.696 17:42:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:49.696 17:42:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.696 17:42:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.696 17:42:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.696 17:42:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:49.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:49.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:16:49.696 00:16:49.696 --- 10.0.0.2 ping statistics --- 00:16:49.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.696 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:16:49.696 17:42:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:16:49.696 00:16:49.696 --- 10.0.0.1 ping statistics --- 00:16:49.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.696 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:16:49.696 17:42:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.696 17:42:10 -- nvmf/common.sh@410 -- # return 0 00:16:49.696 17:42:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:49.696 17:42:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.696 17:42:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:49.696 17:42:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.696 17:42:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:49.696 17:42:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:49.696 17:42:10 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:49.696 17:42:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:49.696 17:42:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:49.696 17:42:10 -- common/autotest_common.sh@10 -- # set +x 00:16:49.696 17:42:10 -- nvmf/common.sh@469 -- # nvmfpid=606777 00:16:49.696 17:42:10 -- nvmf/common.sh@470 -- # waitforlisten 606777 00:16:49.696 17:42:10 -- common/autotest_common.sh@819 -- # '[' -z 606777 ']' 00:16:49.696 17:42:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.696 17:42:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:49.696 17:42:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.696 17:42:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:49.696 17:42:10 -- common/autotest_common.sh@10 -- # set +x 00:16:49.696 17:42:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:49.696 [2024-07-24 17:42:10.625455] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:49.697 [2024-07-24 17:42:10.625501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.697 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.697 [2024-07-24 17:42:10.684270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:49.697 [2024-07-24 17:42:10.764813] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:49.697 [2024-07-24 17:42:10.764921] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.697 [2024-07-24 17:42:10.764931] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:49.697 [2024-07-24 17:42:10.764938] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.697 [2024-07-24 17:42:10.764972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.697 [2024-07-24 17:42:10.764990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.697 [2024-07-24 17:42:10.765080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.697 [2024-07-24 17:42:10.765081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.956 17:42:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:49.956 17:42:11 -- common/autotest_common.sh@852 -- # return 0 00:16:49.956 17:42:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.956 17:42:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:49.956 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.956 17:42:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.956 17:42:11 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:49.956 17:42:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.956 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.956 [2024-07-24 17:42:11.471317] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.956 17:42:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.956 17:42:11 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:49.956 17:42:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.956 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.956 Malloc0 00:16:49.956 17:42:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.956 17:42:11 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:49.956 17:42:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.956 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.956 17:42:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.956 17:42:11 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:49.956 17:42:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.956 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.956 17:42:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.956 17:42:11 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.956 17:42:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.956 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.956 [2024-07-24 17:42:11.522841] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.956 17:42:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.956 17:42:11 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:49.956 test case1: single bdev can't be used in multiple subsystems 00:16:49.956 17:42:11 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:49.956 17:42:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.956 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.956 17:42:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.956 17:42:11 -- target/nmic.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:49.956 17:42:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.956 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.956 17:42:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.956 17:42:11 -- target/nmic.sh@28 -- # nmic_status=0 00:16:49.956 17:42:11 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:49.956 17:42:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.956 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.956 [2024-07-24 17:42:11.550768] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:49.956 [2024-07-24 17:42:11.550787] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:49.957 [2024-07-24 17:42:11.550794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:50.216 request: 00:16:50.216 { 00:16:50.216 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:50.216 "namespace": { 00:16:50.216 "bdev_name": "Malloc0" 00:16:50.216 }, 00:16:50.216 "method": "nvmf_subsystem_add_ns", 00:16:50.216 "req_id": 1 00:16:50.216 } 00:16:50.216 Got JSON-RPC error response 00:16:50.216 response: 00:16:50.216 { 00:16:50.216 "code": -32602, 00:16:50.216 "message": "Invalid parameters" 00:16:50.216 } 00:16:50.216 17:42:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:50.216 17:42:11 -- target/nmic.sh@29 -- # nmic_status=1 00:16:50.216 17:42:11 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:50.216 17:42:11 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:50.216 Adding namespace failed - expected result. 
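The failure above is the expected outcome of test case1: Malloc0 is already claimed (exclusive_write) by nqn.2016-06.io.spdk:cnode1, so attaching it to cnode2 is rejected with the JSON-RPC "Invalid parameters" response shown. The rpc_cmd calls traced here can be reproduced outside the harness with scripts/rpc.py; the lines below are only a minimal sketch of that sequence, not the test script itself, and assume an nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock RPC socket.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # expected to fail: Malloc0 is already claimed by cnode1, mirroring the error above
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
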
00:16:50.216 17:42:11 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:50.216 test case2: host connect to nvmf target in multiple paths 00:16:50.216 17:42:11 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:50.216 17:42:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:50.216 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:50.216 [2024-07-24 17:42:11.562902] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:50.216 17:42:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:50.216 17:42:11 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.153 17:42:12 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:52.532 17:42:13 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.533 17:42:13 -- common/autotest_common.sh@1177 -- # local i=0 00:16:52.533 17:42:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.533 17:42:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:52.533 17:42:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:54.438 17:42:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:54.438 17:42:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:54.438 17:42:15 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.438 17:42:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:54.438 17:42:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.438 17:42:15 -- common/autotest_common.sh@1187 -- # return 0 00:16:54.438 17:42:15 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:54.438 [global] 00:16:54.438 thread=1 00:16:54.438 invalidate=1 00:16:54.438 rw=write 00:16:54.438 time_based=1 00:16:54.438 runtime=1 00:16:54.438 ioengine=libaio 00:16:54.438 direct=1 00:16:54.438 bs=4096 00:16:54.438 iodepth=1 00:16:54.438 norandommap=0 00:16:54.438 numjobs=1 00:16:54.438 00:16:54.438 verify_dump=1 00:16:54.438 verify_backlog=512 00:16:54.438 verify_state_save=0 00:16:54.438 do_verify=1 00:16:54.438 verify=crc32c-intel 00:16:54.438 [job0] 00:16:54.438 filename=/dev/nvme0n1 00:16:54.438 Could not set queue depth (nvme0n1) 00:16:54.697 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.697 fio-3.35 00:16:54.697 Starting 1 thread 00:16:56.076 00:16:56.076 job0: (groupid=0, jobs=1): err= 0: pid=607869: Wed Jul 24 17:42:17 2024 00:16:56.076 read: IOPS=20, BW=82.0KiB/s (84.0kB/s)(84.0KiB/1024msec) 00:16:56.076 slat (nsec): min=9356, max=24063, avg=21770.00, stdev=3157.95 00:16:56.076 clat (usec): min=41087, max=42184, avg=41891.12, stdev=265.63 00:16:56.076 lat (usec): min=41096, max=42208, avg=41912.89, stdev=267.63 00:16:56.076 clat percentiles (usec): 00:16:56.076 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:16:56.076 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:56.076 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:56.076 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:56.076 | 99.99th=[42206] 00:16:56.076 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:16:56.076 slat (nsec): min=8874, max=39692, avg=10175.06, stdev=2118.24 00:16:56.076 clat (usec): min=217, max=906, avg=267.24, stdev=90.42 00:16:56.076 lat (usec): min=227, max=916, avg=277.41, stdev=90.79 00:16:56.076 clat percentiles (usec): 00:16:56.076 | 1.00th=[ 221], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 225], 00:16:56.076 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:16:56.076 | 70.00th=[ 243], 80.00th=[ 285], 90.00th=[ 375], 95.00th=[ 465], 00:16:56.076 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 906], 99.95th=[ 906], 00:16:56.076 | 99.99th=[ 906] 00:16:56.076 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.076 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.076 lat (usec) : 250=70.73%, 500=21.20%, 750=3.94%, 1000=0.19% 00:16:56.076 lat (msec) : 50=3.94% 00:16:56.076 cpu : usr=0.39%, sys=0.29%, ctx=533, majf=0, minf=2 00:16:56.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.076 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.076 00:16:56.076 Run status group 0 (all jobs): 00:16:56.076 READ: bw=82.0KiB/s (84.0kB/s), 82.0KiB/s-82.0KiB/s (84.0kB/s-84.0kB/s), io=84.0KiB (86.0kB), run=1024-1024msec 00:16:56.076 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=2048KiB (2097kB), run=1024-1024msec 00:16:56.076 00:16:56.076 Disk stats (read/write): 00:16:56.076 nvme0n1: ios=68/512, merge=0/0, ticks=830/136, in_queue=966, util=93.39% 00:16:56.076 17:42:17 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:56.076 17:42:17 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.076 17:42:17 -- common/autotest_common.sh@1198 -- # local i=0 00:16:56.076 17:42:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:56.076 17:42:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.076 17:42:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:56.076 17:42:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.076 17:42:17 -- common/autotest_common.sh@1210 -- # return 0 00:16:56.076 17:42:17 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:56.076 17:42:17 -- target/nmic.sh@53 -- # nvmftestfini 00:16:56.076 17:42:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:56.076 17:42:17 -- nvmf/common.sh@116 -- # sync 00:16:56.076 17:42:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:56.076 17:42:17 -- nvmf/common.sh@119 -- # set +e 00:16:56.076 17:42:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:56.076 17:42:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:56.076 rmmod nvme_tcp 00:16:56.076 rmmod nvme_fabrics 00:16:56.076 rmmod nvme_keyring 00:16:56.076 17:42:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:56.076 17:42:17 -- nvmf/common.sh@123 -- # set -e 
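For context on the "disconnected 2 controller(s)" message above: test case2 attached the same subsystem over two listeners (ports 4420 and 4421), and a single NQN-wide disconnect tears both paths down after the fio run. The condensed sketch below uses the same nvme-cli flags that appear in the trace; the host NQN/ID values are the ones generated for this particular run, and the host array simply mirrors the NVME_HOST array from the sourced common.sh. Whether both controllers coalesce into the single /dev/nvme0n1 used by the fio job depends on the kernel's native NVMe multipath support, which appears to be the case here since waitforserial found exactly one matching block device.

  host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562)
  nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  # ... run I/O against the namespace that shows up (here /dev/nvme0n1) ...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both controllers at once
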
00:16:56.076 17:42:17 -- nvmf/common.sh@124 -- # return 0 00:16:56.076 17:42:17 -- nvmf/common.sh@477 -- # '[' -n 606777 ']' 00:16:56.076 17:42:17 -- nvmf/common.sh@478 -- # killprocess 606777 00:16:56.076 17:42:17 -- common/autotest_common.sh@926 -- # '[' -z 606777 ']' 00:16:56.076 17:42:17 -- common/autotest_common.sh@930 -- # kill -0 606777 00:16:56.076 17:42:17 -- common/autotest_common.sh@931 -- # uname 00:16:56.076 17:42:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.076 17:42:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 606777 00:16:56.076 17:42:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:56.076 17:42:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:56.076 17:42:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 606777' 00:16:56.076 killing process with pid 606777 00:16:56.076 17:42:17 -- common/autotest_common.sh@945 -- # kill 606777 00:16:56.076 17:42:17 -- common/autotest_common.sh@950 -- # wait 606777 00:16:56.337 17:42:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:56.337 17:42:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:56.337 17:42:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:56.337 17:42:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.337 17:42:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:56.337 17:42:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.337 17:42:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.337 17:42:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.879 17:42:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:58.879 00:16:58.879 real 0m14.171s 00:16:58.879 user 0m34.644s 00:16:58.879 sys 0m4.282s 00:16:58.879 17:42:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.879 17:42:19 -- common/autotest_common.sh@10 -- # set +x 00:16:58.879 ************************************ 00:16:58.879 END TEST nvmf_nmic 00:16:58.879 ************************************ 00:16:58.879 17:42:19 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:58.879 17:42:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:58.879 17:42:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:58.879 17:42:19 -- common/autotest_common.sh@10 -- # set +x 00:16:58.879 ************************************ 00:16:58.879 START TEST nvmf_fio_target 00:16:58.879 ************************************ 00:16:58.879 17:42:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:58.879 * Looking for test storage... 
00:16:58.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.879 17:42:20 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.879 17:42:20 -- nvmf/common.sh@7 -- # uname -s 00:16:58.879 17:42:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.879 17:42:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.879 17:42:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.879 17:42:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.879 17:42:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.879 17:42:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.879 17:42:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.879 17:42:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.879 17:42:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.879 17:42:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.879 17:42:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.879 17:42:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.879 17:42:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.879 17:42:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.879 17:42:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.879 17:42:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.879 17:42:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.879 17:42:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.879 17:42:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.879 17:42:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.879 17:42:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.879 17:42:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.879 17:42:20 -- paths/export.sh@5 -- # export PATH 00:16:58.879 17:42:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.879 17:42:20 -- nvmf/common.sh@46 -- # : 0 00:16:58.879 17:42:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:58.879 17:42:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:58.879 17:42:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:58.879 17:42:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.880 17:42:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.880 17:42:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:58.880 17:42:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:58.880 17:42:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:58.880 17:42:20 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:58.880 17:42:20 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:58.880 17:42:20 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.880 17:42:20 -- target/fio.sh@16 -- # nvmftestinit 00:16:58.880 17:42:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:58.880 17:42:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.880 17:42:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:58.880 17:42:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:58.880 17:42:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:58.880 17:42:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.880 17:42:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.880 17:42:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.880 17:42:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:58.880 17:42:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:58.880 17:42:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:58.880 17:42:20 -- common/autotest_common.sh@10 -- # set +x 00:17:04.200 17:42:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:04.200 17:42:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:04.200 17:42:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:04.200 17:42:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:04.200 17:42:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:04.200 17:42:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:04.200 17:42:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:04.200 17:42:25 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:04.200 17:42:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:04.200 17:42:25 -- nvmf/common.sh@295 -- # e810=() 00:17:04.200 17:42:25 -- nvmf/common.sh@295 -- # local -ga e810 00:17:04.200 17:42:25 -- nvmf/common.sh@296 -- # x722=() 00:17:04.200 17:42:25 -- nvmf/common.sh@296 -- # local -ga x722 00:17:04.200 17:42:25 -- nvmf/common.sh@297 -- # mlx=() 00:17:04.200 17:42:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:04.200 17:42:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.200 17:42:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:04.200 17:42:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:04.200 17:42:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:04.200 17:42:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:04.200 17:42:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:04.200 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:04.200 17:42:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:04.200 17:42:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:04.200 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:04.200 17:42:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:04.200 17:42:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:04.200 17:42:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.200 17:42:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:04.200 17:42:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:04.200 17:42:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:04.200 Found net devices under 0000:86:00.0: cvl_0_0 00:17:04.200 17:42:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.200 17:42:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:04.200 17:42:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.200 17:42:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:04.200 17:42:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.200 17:42:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:04.200 Found net devices under 0000:86:00.1: cvl_0_1 00:17:04.200 17:42:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.200 17:42:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:04.200 17:42:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:04.200 17:42:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:04.200 17:42:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:04.200 17:42:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.200 17:42:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.200 17:42:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.200 17:42:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:04.200 17:42:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.200 17:42:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.200 17:42:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:04.200 17:42:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.200 17:42:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.200 17:42:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:04.200 17:42:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:04.200 17:42:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.200 17:42:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.200 17:42:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.200 17:42:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.200 17:42:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:04.200 17:42:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.200 17:42:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.200 17:42:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.200 17:42:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:04.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:17:04.200 00:17:04.200 --- 10.0.0.2 ping statistics --- 00:17:04.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.200 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:17:04.200 17:42:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:04.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:17:04.200 00:17:04.200 --- 10.0.0.1 ping statistics --- 00:17:04.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.200 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:17:04.201 17:42:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.201 17:42:25 -- nvmf/common.sh@410 -- # return 0 00:17:04.201 17:42:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:04.201 17:42:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.201 17:42:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:04.201 17:42:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:04.201 17:42:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.201 17:42:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:04.201 17:42:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:04.201 17:42:25 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:04.201 17:42:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:04.201 17:42:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:04.201 17:42:25 -- common/autotest_common.sh@10 -- # set +x 00:17:04.201 17:42:25 -- nvmf/common.sh@469 -- # nvmfpid=611642 00:17:04.201 17:42:25 -- nvmf/common.sh@470 -- # waitforlisten 611642 00:17:04.201 17:42:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:04.201 17:42:25 -- common/autotest_common.sh@819 -- # '[' -z 611642 ']' 00:17:04.201 17:42:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.201 17:42:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:04.201 17:42:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.201 17:42:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:04.201 17:42:25 -- common/autotest_common.sh@10 -- # set +x 00:17:04.201 [2024-07-24 17:42:25.455029] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:04.201 [2024-07-24 17:42:25.455076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.201 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.201 [2024-07-24 17:42:25.513184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.201 [2024-07-24 17:42:25.591388] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:04.201 [2024-07-24 17:42:25.591496] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.201 [2024-07-24 17:42:25.591503] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.201 [2024-07-24 17:42:25.591509] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
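The nvmf_tcp_init sequence traced above splits the two E810 ports between the host and a private network namespace so that initiator and target traffic really crosses the wire: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, cvl_0_1 stays in the default namespace with 10.0.0.1/24, TCP/4420 is opened in iptables, and both directions are verified with ping. A condensed re-creation of that setup, reusing the interface and namespace names from this run (they will differ on other machines), could look like:

#!/usr/bin/env bash
# Sketch of the namespace split performed by nvmf_tcp_init in the trace.
set -euo pipefail

TGT_IF=cvl_0_0            # port that will carry the NVMe/TCP target
INI_IF=cvl_0_1            # port the initiator (host) keeps
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, exactly as the harness does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Anything started with `ip netns exec cvl_0_0_ns_spdk ...` then sees only cvl_0_0 and lo, which is how nvmf_tgt is launched in the lines that follow.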
00:17:04.201 [2024-07-24 17:42:25.591548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.201 [2024-07-24 17:42:25.591567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.201 [2024-07-24 17:42:25.591671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.201 [2024-07-24 17:42:25.591673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.770 17:42:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.770 17:42:26 -- common/autotest_common.sh@852 -- # return 0 00:17:04.770 17:42:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:04.770 17:42:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:04.770 17:42:26 -- common/autotest_common.sh@10 -- # set +x 00:17:04.770 17:42:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.770 17:42:26 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:05.029 [2024-07-24 17:42:26.452780] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.029 17:42:26 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:05.289 17:42:26 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:05.289 17:42:26 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:05.289 17:42:26 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:05.289 17:42:26 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:05.548 17:42:27 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:05.548 17:42:27 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:05.807 17:42:27 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:05.807 17:42:27 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:06.066 17:42:27 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.066 17:42:27 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:06.066 17:42:27 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.326 17:42:27 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:06.326 17:42:27 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.586 17:42:28 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:06.586 17:42:28 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:06.586 17:42:28 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:06.846 17:42:28 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:06.846 17:42:28 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.106 17:42:28 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:07.106 17:42:28 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.365 17:42:28 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.365 [2024-07-24 17:42:28.858514] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.365 17:42:28 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:07.625 17:42:29 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:07.884 17:42:29 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:08.822 17:42:30 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:08.822 17:42:30 -- common/autotest_common.sh@1177 -- # local i=0 00:17:08.822 17:42:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:08.822 17:42:30 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:17:08.822 17:42:30 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:17:08.822 17:42:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:11.358 17:42:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:11.358 17:42:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:11.358 17:42:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:17:11.358 17:42:32 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:17:11.358 17:42:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:11.358 17:42:32 -- common/autotest_common.sh@1187 -- # return 0 00:17:11.358 17:42:32 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:11.358 [global] 00:17:11.358 thread=1 00:17:11.358 invalidate=1 00:17:11.358 rw=write 00:17:11.358 time_based=1 00:17:11.358 runtime=1 00:17:11.358 ioengine=libaio 00:17:11.358 direct=1 00:17:11.358 bs=4096 00:17:11.358 iodepth=1 00:17:11.358 norandommap=0 00:17:11.358 numjobs=1 00:17:11.358 00:17:11.358 verify_dump=1 00:17:11.358 verify_backlog=512 00:17:11.358 verify_state_save=0 00:17:11.358 do_verify=1 00:17:11.358 verify=crc32c-intel 00:17:11.358 [job0] 00:17:11.358 filename=/dev/nvme0n1 00:17:11.358 [job1] 00:17:11.358 filename=/dev/nvme0n2 00:17:11.358 [job2] 00:17:11.358 filename=/dev/nvme0n3 00:17:11.358 [job3] 00:17:11.358 filename=/dev/nvme0n4 00:17:11.358 Could not set queue depth (nvme0n1) 00:17:11.358 Could not set queue depth (nvme0n2) 00:17:11.358 Could not set queue depth (nvme0n3) 00:17:11.358 Could not set queue depth (nvme0n4) 00:17:11.358 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.358 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.358 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.358 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.358 fio-3.35 
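The RPC sequence driven by target/fio.sh above builds the whole target before fio starts: a TCP transport with an 8192-byte I/O unit, seven 64 MB malloc bdevs with 512-byte blocks, a raid0 bdev striped over Malloc2/Malloc3 and a concat bdev over Malloc4-6, a subsystem cnode1 with serial SPDKISFASTANDAWESOME carrying four namespaces, and a listener on 10.0.0.2:4420; the initiator then connects with nvme-cli and waits until four namespaces show up in lsblk. A condensed sketch of that sequence, assuming nvmf_tgt is already running, rpc.py is on $PATH, and omitting the --hostnqn/--hostid values the harness passes to nvme connect:

#!/usr/bin/env bash
# Condensed version of the target setup performed by target/fio.sh above.
set -euo pipefail
rpc=rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192

for _ in $(seq 1 7); do
    $rpc bdev_malloc_create 64 512          # -> Malloc0 .. Malloc6
done

$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

$rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns "$NQN" "$bdev"
done
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect and wait until all 4 namespaces are visible
# (the harness additionally bounds the number of retries).
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 4 ]; do
    sleep 1
done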
00:17:11.358 Starting 4 threads 00:17:12.750 00:17:12.751 job0: (groupid=0, jobs=1): err= 0: pid=613013: Wed Jul 24 17:42:34 2024 00:17:12.751 read: IOPS=18, BW=72.9KiB/s (74.7kB/s)(76.0KiB/1042msec) 00:17:12.751 slat (nsec): min=11159, max=24742, avg=19567.32, stdev=4664.13 00:17:12.751 clat (usec): min=40953, max=42074, avg=41777.03, stdev=400.95 00:17:12.751 lat (usec): min=40966, max=42099, avg=41796.60, stdev=402.50 00:17:12.751 clat percentiles (usec): 00:17:12.751 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:12.751 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:12.751 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:12.751 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:12.751 | 99.99th=[42206] 00:17:12.751 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:17:12.751 slat (usec): min=10, max=45128, avg=101.39, stdev=1993.82 00:17:12.751 clat (usec): min=239, max=3148, avg=379.46, stdev=176.36 00:17:12.751 lat (usec): min=250, max=45839, avg=480.85, stdev=2016.25 00:17:12.751 clat percentiles (usec): 00:17:12.751 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 269], 00:17:12.751 | 30.00th=[ 285], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 367], 00:17:12.751 | 70.00th=[ 396], 80.00th=[ 465], 90.00th=[ 578], 95.00th=[ 594], 00:17:12.751 | 99.00th=[ 799], 99.50th=[ 1037], 99.90th=[ 3163], 99.95th=[ 3163], 00:17:12.751 | 99.99th=[ 3163] 00:17:12.751 bw ( KiB/s): min= 4096, max= 4096, per=41.68%, avg=4096.00, stdev= 0.00, samples=1 00:17:12.751 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:12.751 lat (usec) : 250=3.95%, 500=76.27%, 750=14.88%, 1000=0.75% 00:17:12.751 lat (msec) : 2=0.38%, 4=0.19%, 50=3.58% 00:17:12.751 cpu : usr=0.29%, sys=1.15%, ctx=534, majf=0, minf=1 00:17:12.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:12.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.751 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:12.751 job1: (groupid=0, jobs=1): err= 0: pid=613014: Wed Jul 24 17:42:34 2024 00:17:12.751 read: IOPS=18, BW=74.4KiB/s (76.2kB/s)(76.0KiB/1021msec) 00:17:12.751 slat (nsec): min=10523, max=23319, avg=21028.79, stdev=2873.07 00:17:12.751 clat (usec): min=41022, max=42101, avg=41871.12, stdev=293.34 00:17:12.751 lat (usec): min=41045, max=42122, avg=41892.15, stdev=294.63 00:17:12.751 clat percentiles (usec): 00:17:12.751 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:17:12.751 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:12.751 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:12.751 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:12.751 | 99.99th=[42206] 00:17:12.751 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:17:12.751 slat (nsec): min=3303, max=24173, avg=8047.54, stdev=3901.79 00:17:12.751 clat (usec): min=259, max=1450, avg=427.40, stdev=160.71 00:17:12.751 lat (usec): min=267, max=1454, avg=435.45, stdev=159.61 00:17:12.751 clat percentiles (usec): 00:17:12.751 | 1.00th=[ 302], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 343], 00:17:12.751 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 392], 60.00th=[ 400], 
00:17:12.751 | 70.00th=[ 408], 80.00th=[ 429], 90.00th=[ 545], 95.00th=[ 816], 00:17:12.751 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1450], 99.95th=[ 1450], 00:17:12.751 | 99.99th=[ 1450] 00:17:12.751 bw ( KiB/s): min= 4087, max= 4087, per=41.59%, avg=4087.00, stdev= 0.00, samples=1 00:17:12.751 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:12.751 lat (usec) : 500=84.75%, 750=6.40%, 1000=2.64% 00:17:12.751 lat (msec) : 2=2.64%, 50=3.58% 00:17:12.751 cpu : usr=0.29%, sys=0.59%, ctx=534, majf=0, minf=1 00:17:12.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:12.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.751 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:12.751 job2: (groupid=0, jobs=1): err= 0: pid=613015: Wed Jul 24 17:42:34 2024 00:17:12.751 read: IOPS=18, BW=74.0KiB/s (75.8kB/s)(76.0KiB/1027msec) 00:17:12.751 slat (nsec): min=10345, max=28557, avg=21816.05, stdev=3199.27 00:17:12.751 clat (usec): min=41131, max=42084, avg=41884.83, stdev=257.89 00:17:12.751 lat (usec): min=41153, max=42106, avg=41906.64, stdev=259.78 00:17:12.751 clat percentiles (usec): 00:17:12.751 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:17:12.751 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:12.751 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:12.751 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:12.751 | 99.99th=[42206] 00:17:12.751 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:17:12.751 slat (usec): min=11, max=1355, avg=15.54, stdev=59.37 00:17:12.751 clat (usec): min=216, max=1707, avg=430.30, stdev=181.38 00:17:12.751 lat (usec): min=228, max=1807, avg=445.84, stdev=192.07 00:17:12.751 clat percentiles (usec): 00:17:12.751 | 1.00th=[ 227], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 355], 00:17:12.751 | 30.00th=[ 367], 40.00th=[ 379], 50.00th=[ 396], 60.00th=[ 404], 00:17:12.751 | 70.00th=[ 416], 80.00th=[ 429], 90.00th=[ 490], 95.00th=[ 766], 00:17:12.751 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[ 1713], 99.95th=[ 1713], 00:17:12.751 | 99.99th=[ 1713] 00:17:12.751 bw ( KiB/s): min= 4096, max= 4096, per=41.68%, avg=4096.00, stdev= 0.00, samples=1 00:17:12.751 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:12.751 lat (usec) : 250=4.33%, 500=82.67%, 750=3.39%, 1000=2.82% 00:17:12.751 lat (msec) : 2=3.20%, 50=3.58% 00:17:12.751 cpu : usr=0.49%, sys=0.97%, ctx=533, majf=0, minf=1 00:17:12.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:12.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.751 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:12.751 job3: (groupid=0, jobs=1): err= 0: pid=613016: Wed Jul 24 17:42:34 2024 00:17:12.751 read: IOPS=594, BW=2378KiB/s (2435kB/s)(2404KiB/1011msec) 00:17:12.751 slat (nsec): min=2997, max=42864, avg=8812.51, stdev=2974.06 00:17:12.751 clat (usec): min=402, max=42044, avg=1210.48, stdev=5301.96 00:17:12.751 lat (usec): min=405, max=42067, avg=1219.29, stdev=5303.65 00:17:12.751 clat 
percentiles (usec): 00:17:12.751 | 1.00th=[ 416], 5.00th=[ 429], 10.00th=[ 441], 20.00th=[ 457], 00:17:12.751 | 30.00th=[ 469], 40.00th=[ 486], 50.00th=[ 502], 60.00th=[ 515], 00:17:12.751 | 70.00th=[ 545], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 783], 00:17:12.751 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:12.751 | 99.99th=[42206] 00:17:12.751 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:17:12.751 slat (nsec): min=3586, max=77223, avg=10358.64, stdev=5463.26 00:17:12.751 clat (usec): min=198, max=733, avg=255.87, stdev=71.81 00:17:12.751 lat (usec): min=202, max=745, avg=266.22, stdev=73.76 00:17:12.751 clat percentiles (usec): 00:17:12.751 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:17:12.751 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 239], 00:17:12.751 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[ 310], 95.00th=[ 429], 00:17:12.751 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 660], 99.95th=[ 734], 00:17:12.751 | 99.99th=[ 734] 00:17:12.751 bw ( KiB/s): min= 1704, max= 6488, per=41.68%, avg=4096.00, stdev=3382.80, samples=2 00:17:12.751 iops : min= 426, max= 1622, avg=1024.00, stdev=845.70, samples=2 00:17:12.751 lat (usec) : 250=43.08%, 500=36.43%, 750=18.58%, 1000=1.17% 00:17:12.751 lat (msec) : 2=0.12%, 50=0.62% 00:17:12.751 cpu : usr=1.49%, sys=2.08%, ctx=1626, majf=0, minf=2 00:17:12.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:12.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.751 issued rwts: total=601,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:12.751 00:17:12.751 Run status group 0 (all jobs): 00:17:12.751 READ: bw=2526KiB/s (2587kB/s), 72.9KiB/s-2378KiB/s (74.7kB/s-2435kB/s), io=2632KiB (2695kB), run=1011-1042msec 00:17:12.751 WRITE: bw=9827KiB/s (10.1MB/s), 1965KiB/s-4051KiB/s (2013kB/s-4149kB/s), io=10.0MiB (10.5MB), run=1011-1042msec 00:17:12.751 00:17:12.751 Disk stats (read/write): 00:17:12.751 nvme0n1: ios=63/512, merge=0/0, ticks=886/194, in_queue=1080, util=86.87% 00:17:12.751 nvme0n2: ios=37/512, merge=0/0, ticks=1512/211, in_queue=1723, util=89.92% 00:17:12.751 nvme0n3: ios=71/512, merge=0/0, ticks=769/214, in_queue=983, util=93.54% 00:17:12.751 nvme0n4: ios=654/1024, merge=0/0, ticks=777/254, in_queue=1031, util=94.44% 00:17:12.751 17:42:34 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:12.751 [global] 00:17:12.751 thread=1 00:17:12.751 invalidate=1 00:17:12.751 rw=randwrite 00:17:12.751 time_based=1 00:17:12.751 runtime=1 00:17:12.751 ioengine=libaio 00:17:12.751 direct=1 00:17:12.751 bs=4096 00:17:12.751 iodepth=1 00:17:12.751 norandommap=0 00:17:12.751 numjobs=1 00:17:12.751 00:17:12.751 verify_dump=1 00:17:12.751 verify_backlog=512 00:17:12.751 verify_state_save=0 00:17:12.751 do_verify=1 00:17:12.751 verify=crc32c-intel 00:17:12.751 [job0] 00:17:12.751 filename=/dev/nvme0n1 00:17:12.751 [job1] 00:17:12.751 filename=/dev/nvme0n2 00:17:12.751 [job2] 00:17:12.751 filename=/dev/nvme0n3 00:17:12.751 [job3] 00:17:12.751 filename=/dev/nvme0n4 00:17:12.751 Could not set queue depth (nvme0n1) 00:17:12.752 Could not set queue depth (nvme0n2) 00:17:12.752 Could not set queue depth (nvme0n3) 00:17:12.752 Could not set queue depth (nvme0n4) 
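fio-wrapper echoes essentially the same job file for every pass, varying only rw (write/randwrite), iodepth (1 or 128) and, for the final hotplug pass, the runtime; the full contents appear in the dump above. For reference, a standalone equivalent of this 4 KiB, queue-depth-1 randwrite verify pass, assuming the four connected namespaces enumerate as /dev/nvme0n1-n4 as they do in this run, could be:

#!/usr/bin/env bash
# Standalone equivalent of the job file fio-wrapper prints above
# (rw and iodepth are the knobs the harness varies between passes).
set -euo pipefail

cat > /tmp/nvmf-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF

fio /tmp/nvmf-verify.fio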
00:17:13.009 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.009 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.009 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.009 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.009 fio-3.35 00:17:13.009 Starting 4 threads 00:17:14.381 00:17:14.381 job0: (groupid=0, jobs=1): err= 0: pid=613394: Wed Jul 24 17:42:35 2024 00:17:14.381 read: IOPS=19, BW=78.1KiB/s (80.0kB/s)(80.0KiB/1024msec) 00:17:14.381 slat (nsec): min=9106, max=23582, avg=22213.00, stdev=3134.57 00:17:14.381 clat (usec): min=41113, max=43092, avg=41983.04, stdev=337.66 00:17:14.381 lat (usec): min=41122, max=43115, avg=42005.25, stdev=339.55 00:17:14.381 clat percentiles (usec): 00:17:14.381 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:14.381 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:14.381 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:14.381 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:14.381 | 99.99th=[43254] 00:17:14.381 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:17:14.381 slat (nsec): min=9103, max=40606, avg=10761.37, stdev=2466.61 00:17:14.381 clat (usec): min=215, max=901, avg=346.07, stdev=104.80 00:17:14.381 lat (usec): min=225, max=942, avg=356.83, stdev=105.46 00:17:14.381 clat percentiles (usec): 00:17:14.381 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 243], 00:17:14.381 | 30.00th=[ 273], 40.00th=[ 297], 50.00th=[ 334], 60.00th=[ 351], 00:17:14.381 | 70.00th=[ 379], 80.00th=[ 433], 90.00th=[ 494], 95.00th=[ 545], 00:17:14.381 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 906], 99.95th=[ 906], 00:17:14.381 | 99.99th=[ 906] 00:17:14.381 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.381 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.381 lat (usec) : 250=23.12%, 500=65.79%, 750=7.14%, 1000=0.19% 00:17:14.381 lat (msec) : 50=3.76% 00:17:14.381 cpu : usr=0.88%, sys=0.00%, ctx=534, majf=0, minf=1 00:17:14.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.381 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.381 job1: (groupid=0, jobs=1): err= 0: pid=613395: Wed Jul 24 17:42:35 2024 00:17:14.381 read: IOPS=20, BW=82.0KiB/s (84.0kB/s)(84.0KiB/1024msec) 00:17:14.381 slat (nsec): min=6245, max=20805, avg=19534.00, stdev=3184.52 00:17:14.381 clat (usec): min=41042, max=42061, avg=41883.47, stdev=277.97 00:17:14.381 lat (usec): min=41058, max=42081, avg=41903.00, stdev=280.69 00:17:14.381 clat percentiles (usec): 00:17:14.381 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:14.381 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:14.381 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:14.381 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:14.381 | 99.99th=[42206] 00:17:14.381 
write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:17:14.381 slat (nsec): min=5989, max=37596, avg=7937.49, stdev=2474.52 00:17:14.381 clat (usec): min=208, max=741, avg=270.92, stdev=90.14 00:17:14.381 lat (usec): min=215, max=762, avg=278.86, stdev=92.02 00:17:14.381 clat percentiles (usec): 00:17:14.381 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:17:14.381 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:17:14.381 | 70.00th=[ 260], 80.00th=[ 302], 90.00th=[ 396], 95.00th=[ 461], 00:17:14.381 | 99.00th=[ 594], 99.50th=[ 676], 99.90th=[ 742], 99.95th=[ 742], 00:17:14.381 | 99.99th=[ 742] 00:17:14.381 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.381 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.381 lat (usec) : 250=62.85%, 500=29.46%, 750=3.75% 00:17:14.381 lat (msec) : 50=3.94% 00:17:14.381 cpu : usr=0.00%, sys=0.59%, ctx=533, majf=0, minf=1 00:17:14.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.381 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.381 job2: (groupid=0, jobs=1): err= 0: pid=613397: Wed Jul 24 17:42:35 2024 00:17:14.381 read: IOPS=19, BW=77.1KiB/s (79.0kB/s)(80.0KiB/1037msec) 00:17:14.381 slat (nsec): min=10417, max=26300, avg=22134.05, stdev=2941.87 00:17:14.381 clat (usec): min=41567, max=42073, avg=41937.07, stdev=111.18 00:17:14.381 lat (usec): min=41578, max=42097, avg=41959.21, stdev=113.25 00:17:14.381 clat percentiles (usec): 00:17:14.381 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:14.381 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:14.381 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:14.381 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:14.381 | 99.99th=[42206] 00:17:14.381 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:17:14.381 slat (nsec): min=11059, max=42571, avg=13142.79, stdev=3089.28 00:17:14.381 clat (usec): min=241, max=2396, avg=369.66, stdev=148.44 00:17:14.381 lat (usec): min=253, max=2410, avg=382.80, stdev=149.22 00:17:14.381 clat percentiles (usec): 00:17:14.381 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 277], 00:17:14.381 | 30.00th=[ 285], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 355], 00:17:14.381 | 70.00th=[ 400], 80.00th=[ 453], 90.00th=[ 570], 95.00th=[ 578], 00:17:14.381 | 99.00th=[ 832], 99.50th=[ 1057], 99.90th=[ 2409], 99.95th=[ 2409], 00:17:14.381 | 99.99th=[ 2409] 00:17:14.381 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.381 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.381 lat (usec) : 250=2.07%, 500=82.33%, 750=10.53%, 1000=0.75% 00:17:14.381 lat (msec) : 2=0.38%, 4=0.19%, 50=3.76% 00:17:14.381 cpu : usr=0.39%, sys=0.97%, ctx=533, majf=0, minf=1 00:17:14.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.381 issued rwts: total=20,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:17:14.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.381 job3: (groupid=0, jobs=1): err= 0: pid=613398: Wed Jul 24 17:42:35 2024 00:17:14.381 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:17:14.381 slat (nsec): min=9231, max=25006, avg=23067.27, stdev=3152.04 00:17:14.381 clat (usec): min=1182, max=42071, avg=38231.62, stdev=11980.21 00:17:14.381 lat (usec): min=1207, max=42095, avg=38254.69, stdev=11979.62 00:17:14.381 clat percentiles (usec): 00:17:14.381 | 1.00th=[ 1188], 5.00th=[ 1254], 10.00th=[41681], 20.00th=[41681], 00:17:14.381 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:14.381 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:14.381 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:14.381 | 99.99th=[42206] 00:17:14.381 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:17:14.381 slat (nsec): min=3988, max=36590, avg=10630.08, stdev=2563.21 00:17:14.381 clat (usec): min=215, max=2295, avg=337.15, stdev=160.84 00:17:14.381 lat (usec): min=225, max=2317, avg=347.78, stdev=160.98 00:17:14.381 clat percentiles (usec): 00:17:14.381 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 237], 00:17:14.381 | 30.00th=[ 247], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 306], 00:17:14.381 | 70.00th=[ 363], 80.00th=[ 457], 90.00th=[ 545], 95.00th=[ 586], 00:17:14.381 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 2311], 99.95th=[ 2311], 00:17:14.381 | 99.99th=[ 2311] 00:17:14.381 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.381 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.381 lat (usec) : 250=30.71%, 500=53.18%, 750=9.36%, 1000=2.25% 00:17:14.381 lat (msec) : 2=0.56%, 4=0.19%, 50=3.75% 00:17:14.381 cpu : usr=0.59%, sys=0.20%, ctx=535, majf=0, minf=2 00:17:14.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.381 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.381 00:17:14.381 Run status group 0 (all jobs): 00:17:14.381 READ: bw=320KiB/s (328kB/s), 77.1KiB/s-86.2KiB/s (79.0kB/s-88.3kB/s), io=332KiB (340kB), run=1021-1037msec 00:17:14.381 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2006KiB/s (2022kB/s-2054kB/s), io=8192KiB (8389kB), run=1021-1037msec 00:17:14.381 00:17:14.381 Disk stats (read/write): 00:17:14.381 nvme0n1: ios=39/512, merge=0/0, ticks=1639/179, in_queue=1818, util=97.09% 00:17:14.381 nvme0n2: ios=49/512, merge=0/0, ticks=715/136, in_queue=851, util=88.43% 00:17:14.381 nvme0n3: ios=38/512, merge=0/0, ticks=1598/179, in_queue=1777, util=96.98% 00:17:14.381 nvme0n4: ios=59/512, merge=0/0, ticks=916/172, in_queue=1088, util=98.01% 00:17:14.381 17:42:35 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:14.381 [global] 00:17:14.381 thread=1 00:17:14.381 invalidate=1 00:17:14.381 rw=write 00:17:14.381 time_based=1 00:17:14.381 runtime=1 00:17:14.381 ioengine=libaio 00:17:14.381 direct=1 00:17:14.381 bs=4096 00:17:14.381 iodepth=128 00:17:14.381 norandommap=0 00:17:14.381 numjobs=1 00:17:14.381 00:17:14.381 verify_dump=1 00:17:14.381 verify_backlog=512 00:17:14.381 
verify_state_save=0 00:17:14.381 do_verify=1 00:17:14.382 verify=crc32c-intel 00:17:14.382 [job0] 00:17:14.382 filename=/dev/nvme0n1 00:17:14.382 [job1] 00:17:14.382 filename=/dev/nvme0n2 00:17:14.382 [job2] 00:17:14.382 filename=/dev/nvme0n3 00:17:14.382 [job3] 00:17:14.382 filename=/dev/nvme0n4 00:17:14.382 Could not set queue depth (nvme0n1) 00:17:14.382 Could not set queue depth (nvme0n2) 00:17:14.382 Could not set queue depth (nvme0n3) 00:17:14.382 Could not set queue depth (nvme0n4) 00:17:14.382 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:14.382 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:14.382 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:14.382 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:14.382 fio-3.35 00:17:14.382 Starting 4 threads 00:17:15.758 00:17:15.758 job0: (groupid=0, jobs=1): err= 0: pid=613775: Wed Jul 24 17:42:37 2024 00:17:15.758 read: IOPS=4999, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1003msec) 00:17:15.758 slat (nsec): min=1054, max=7326.6k, avg=67085.03, stdev=389382.12 00:17:15.758 clat (usec): min=1276, max=24470, avg=8619.33, stdev=2636.42 00:17:15.758 lat (usec): min=1868, max=24475, avg=8686.42, stdev=2660.66 00:17:15.758 clat percentiles (usec): 00:17:15.758 | 1.00th=[ 3392], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 7046], 00:17:15.758 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8356], 00:17:15.758 | 70.00th=[ 8979], 80.00th=[10028], 90.00th=[12125], 95.00th=[13960], 00:17:15.758 | 99.00th=[17433], 99.50th=[22414], 99.90th=[24511], 99.95th=[24511], 00:17:15.758 | 99.99th=[24511] 00:17:15.758 write: IOPS=5188, BW=20.3MiB/s (21.3MB/s)(20.3MiB/1003msec); 0 zone resets 00:17:15.758 slat (nsec): min=1917, max=32146k, avg=106593.02, stdev=578812.81 00:17:15.758 clat (usec): min=1086, max=67846, avg=15277.26, stdev=7062.84 00:17:15.758 lat (usec): min=1095, max=67855, avg=15383.85, stdev=7103.75 00:17:15.758 clat percentiles (usec): 00:17:15.758 | 1.00th=[ 3621], 5.00th=[ 5014], 10.00th=[ 7177], 20.00th=[ 8979], 00:17:15.758 | 30.00th=[11338], 40.00th=[12780], 50.00th=[14746], 60.00th=[16909], 00:17:15.758 | 70.00th=[19792], 80.00th=[20841], 90.00th=[23200], 95.00th=[23725], 00:17:15.758 | 99.00th=[25822], 99.50th=[51119], 99.90th=[63177], 99.95th=[63177], 00:17:15.758 | 99.99th=[67634] 00:17:15.758 bw ( KiB/s): min=18576, max=23056, per=36.78%, avg=20816.00, stdev=3167.84, samples=2 00:17:15.758 iops : min= 4644, max= 5764, avg=5204.00, stdev=791.96, samples=2 00:17:15.758 lat (msec) : 2=0.11%, 4=1.50%, 10=49.93%, 20=34.07%, 50=13.99% 00:17:15.758 lat (msec) : 100=0.41% 00:17:15.758 cpu : usr=3.09%, sys=3.49%, ctx=1180, majf=0, minf=1 00:17:15.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:15.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:15.758 issued rwts: total=5014,5204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:15.758 job1: (groupid=0, jobs=1): err= 0: pid=613776: Wed Jul 24 17:42:37 2024 00:17:15.758 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:17:15.758 slat (nsec): min=1053, max=25791k, avg=208388.46, stdev=1524301.59 00:17:15.758 clat 
(usec): min=7163, max=82499, avg=27784.23, stdev=15198.22 00:17:15.758 lat (usec): min=8082, max=82507, avg=27992.62, stdev=15319.64 00:17:15.758 clat percentiles (usec): 00:17:15.758 | 1.00th=[ 8225], 5.00th=[ 9503], 10.00th=[10945], 20.00th=[13435], 00:17:15.758 | 30.00th=[15533], 40.00th=[20841], 50.00th=[25297], 60.00th=[28705], 00:17:15.758 | 70.00th=[34341], 80.00th=[41157], 90.00th=[49546], 95.00th=[53740], 00:17:15.758 | 99.00th=[65799], 99.50th=[65799], 99.90th=[73925], 99.95th=[73925], 00:17:15.758 | 99.99th=[82314] 00:17:15.758 write: IOPS=2371, BW=9486KiB/s (9713kB/s)(9552KiB/1007msec); 0 zone resets 00:17:15.758 slat (usec): min=2, max=28874, avg=207.83, stdev=1114.73 00:17:15.758 clat (msec): min=5, max=109, avg=29.27, stdev=19.68 00:17:15.758 lat (msec): min=5, max=109, avg=29.48, stdev=19.80 00:17:15.758 clat percentiles (msec): 00:17:15.758 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 14], 20.00th=[ 16], 00:17:15.758 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 23], 60.00th=[ 26], 00:17:15.758 | 70.00th=[ 30], 80.00th=[ 39], 90.00th=[ 63], 95.00th=[ 71], 00:17:15.758 | 99.00th=[ 103], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 107], 00:17:15.758 | 99.99th=[ 110] 00:17:15.758 bw ( KiB/s): min= 5792, max=12288, per=15.97%, avg=9040.00, stdev=4593.37, samples=2 00:17:15.758 iops : min= 1448, max= 3072, avg=2260.00, stdev=1148.34, samples=2 00:17:15.758 lat (msec) : 10=3.47%, 20=33.50%, 50=52.14%, 100=10.08%, 250=0.81% 00:17:15.758 cpu : usr=0.99%, sys=2.19%, ctx=362, majf=0, minf=1 00:17:15.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:17:15.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:15.758 issued rwts: total=2048,2388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:15.758 job2: (groupid=0, jobs=1): err= 0: pid=613777: Wed Jul 24 17:42:37 2024 00:17:15.758 read: IOPS=4068, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:17:15.758 slat (nsec): min=1445, max=31110k, avg=128833.95, stdev=978502.18 00:17:15.758 clat (usec): min=5170, max=78776, avg=17921.79, stdev=10312.33 00:17:15.758 lat (usec): min=6723, max=78782, avg=18050.62, stdev=10379.27 00:17:15.758 clat percentiles (usec): 00:17:15.758 | 1.00th=[ 7635], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11600], 00:17:15.758 | 30.00th=[13042], 40.00th=[14484], 50.00th=[15664], 60.00th=[16909], 00:17:15.758 | 70.00th=[18744], 80.00th=[20841], 90.00th=[23462], 95.00th=[29492], 00:17:15.758 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[79168], 00:17:15.758 | 99.99th=[79168] 00:17:15.758 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:17:15.758 slat (usec): min=2, max=11089, avg=107.84, stdev=664.90 00:17:15.758 clat (usec): min=3846, max=32480, avg=12998.71, stdev=4572.22 00:17:15.758 lat (usec): min=5770, max=32491, avg=13106.54, stdev=4593.77 00:17:15.758 clat percentiles (usec): 00:17:15.758 | 1.00th=[ 6783], 5.00th=[ 7504], 10.00th=[ 8356], 20.00th=[ 9241], 00:17:15.758 | 30.00th=[10552], 40.00th=[11469], 50.00th=[11994], 60.00th=[12911], 00:17:15.758 | 70.00th=[13960], 80.00th=[15533], 90.00th=[18220], 95.00th=[22938], 00:17:15.758 | 99.00th=[28967], 99.50th=[30278], 99.90th=[32375], 99.95th=[32375], 00:17:15.758 | 99.99th=[32375] 00:17:15.758 bw ( KiB/s): min=13288, max=19480, per=28.95%, avg=16384.00, stdev=4378.41, samples=2 00:17:15.758 iops : min= 3322, max= 4870, 
avg=4096.00, stdev=1094.60, samples=2 00:17:15.758 lat (msec) : 4=0.01%, 10=16.42%, 20=67.63%, 50=14.45%, 100=1.49% 00:17:15.758 cpu : usr=3.18%, sys=4.88%, ctx=351, majf=0, minf=1 00:17:15.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:15.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:15.758 issued rwts: total=4093,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:15.758 job3: (groupid=0, jobs=1): err= 0: pid=613778: Wed Jul 24 17:42:37 2024 00:17:15.758 read: IOPS=2411, BW=9646KiB/s (9878kB/s)(9704KiB/1006msec) 00:17:15.758 slat (nsec): min=1860, max=19036k, avg=195052.51, stdev=1253960.10 00:17:15.758 clat (usec): min=1959, max=56333, avg=25594.49, stdev=10907.81 00:17:15.758 lat (usec): min=3320, max=56364, avg=25789.54, stdev=11006.08 00:17:15.758 clat percentiles (usec): 00:17:15.758 | 1.00th=[ 8225], 5.00th=[11731], 10.00th=[14222], 20.00th=[16581], 00:17:15.758 | 30.00th=[18482], 40.00th=[20055], 50.00th=[22414], 60.00th=[25035], 00:17:15.758 | 70.00th=[31065], 80.00th=[36963], 90.00th=[42206], 95.00th=[45351], 00:17:15.758 | 99.00th=[52167], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:17:15.758 | 99.99th=[56361] 00:17:15.758 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:17:15.758 slat (usec): min=2, max=16735, avg=198.11, stdev=1093.94 00:17:15.758 clat (usec): min=1701, max=77557, avg=25572.12, stdev=17877.62 00:17:15.758 lat (usec): min=1740, max=77573, avg=25770.23, stdev=18011.78 00:17:15.758 clat percentiles (usec): 00:17:15.758 | 1.00th=[ 8029], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[13698], 00:17:15.758 | 30.00th=[16450], 40.00th=[18482], 50.00th=[20055], 60.00th=[21365], 00:17:15.758 | 70.00th=[22938], 80.00th=[28181], 90.00th=[63701], 95.00th=[69731], 00:17:15.758 | 99.00th=[76022], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:17:15.758 | 99.99th=[77071] 00:17:15.758 bw ( KiB/s): min= 7376, max=13104, per=18.09%, avg=10240.00, stdev=4050.31, samples=2 00:17:15.758 iops : min= 1844, max= 3276, avg=2560.00, stdev=1012.58, samples=2 00:17:15.758 lat (msec) : 2=0.04%, 4=0.06%, 10=4.57%, 20=38.47%, 50=48.68% 00:17:15.758 lat (msec) : 100=8.18% 00:17:15.758 cpu : usr=2.29%, sys=2.69%, ctx=361, majf=0, minf=1 00:17:15.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:17:15.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:15.758 issued rwts: total=2426,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:15.758 00:17:15.758 Run status group 0 (all jobs): 00:17:15.758 READ: bw=52.7MiB/s (55.2MB/s), 8135KiB/s-19.5MiB/s (8330kB/s-20.5MB/s), io=53.1MiB (55.6MB), run=1003-1007msec 00:17:15.758 WRITE: bw=55.3MiB/s (58.0MB/s), 9486KiB/s-20.3MiB/s (9713kB/s-21.3MB/s), io=55.7MiB (58.4MB), run=1003-1007msec 00:17:15.758 00:17:15.758 Disk stats (read/write): 00:17:15.758 nvme0n1: ios=4119/4312, merge=0/0, ticks=25707/42380, in_queue=68087, util=95.49% 00:17:15.758 nvme0n2: ios=2090/2111, merge=0/0, ticks=28432/26842, in_queue=55274, util=99.29% 00:17:15.758 nvme0n3: ios=3129/3584, merge=0/0, ticks=51874/41035, in_queue=92909, util=93.78% 00:17:15.758 nvme0n4: ios=2089/2455, merge=0/0, ticks=28554/35137, 
in_queue=63691, util=99.37% 00:17:15.758 17:42:37 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:15.758 [global] 00:17:15.758 thread=1 00:17:15.758 invalidate=1 00:17:15.758 rw=randwrite 00:17:15.758 time_based=1 00:17:15.758 runtime=1 00:17:15.758 ioengine=libaio 00:17:15.758 direct=1 00:17:15.758 bs=4096 00:17:15.758 iodepth=128 00:17:15.759 norandommap=0 00:17:15.759 numjobs=1 00:17:15.759 00:17:15.759 verify_dump=1 00:17:15.759 verify_backlog=512 00:17:15.759 verify_state_save=0 00:17:15.759 do_verify=1 00:17:15.759 verify=crc32c-intel 00:17:15.759 [job0] 00:17:15.759 filename=/dev/nvme0n1 00:17:15.759 [job1] 00:17:15.759 filename=/dev/nvme0n2 00:17:15.759 [job2] 00:17:15.759 filename=/dev/nvme0n3 00:17:15.759 [job3] 00:17:15.759 filename=/dev/nvme0n4 00:17:15.759 Could not set queue depth (nvme0n1) 00:17:15.759 Could not set queue depth (nvme0n2) 00:17:15.759 Could not set queue depth (nvme0n3) 00:17:15.759 Could not set queue depth (nvme0n4) 00:17:16.016 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.016 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.017 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.017 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.017 fio-3.35 00:17:16.017 Starting 4 threads 00:17:17.415 00:17:17.415 job0: (groupid=0, jobs=1): err= 0: pid=614151: Wed Jul 24 17:42:38 2024 00:17:17.415 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:17:17.415 slat (nsec): min=1490, max=18506k, avg=118360.91, stdev=824449.28 00:17:17.415 clat (usec): min=5635, max=39765, avg=15844.22, stdev=5418.30 00:17:17.415 lat (usec): min=5643, max=41717, avg=15962.58, stdev=5450.95 00:17:17.415 clat percentiles (usec): 00:17:17.415 | 1.00th=[ 6980], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[11207], 00:17:17.415 | 30.00th=[12518], 40.00th=[14091], 50.00th=[15008], 60.00th=[16319], 00:17:17.415 | 70.00th=[17433], 80.00th=[19006], 90.00th=[23200], 95.00th=[26608], 00:17:17.415 | 99.00th=[32375], 99.50th=[38536], 99.90th=[39584], 99.95th=[39584], 00:17:17.415 | 99.99th=[39584] 00:17:17.415 write: IOPS=4120, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1008msec); 0 zone resets 00:17:17.415 slat (usec): min=2, max=10769, avg=120.41, stdev=620.13 00:17:17.415 clat (usec): min=1249, max=38682, avg=15170.10, stdev=4605.57 00:17:17.415 lat (usec): min=1258, max=38690, avg=15290.50, stdev=4614.24 00:17:17.415 clat percentiles (usec): 00:17:17.415 | 1.00th=[ 5473], 5.00th=[ 7242], 10.00th=[ 9110], 20.00th=[10945], 00:17:17.415 | 30.00th=[12125], 40.00th=[13829], 50.00th=[15270], 60.00th=[17171], 00:17:17.415 | 70.00th=[18744], 80.00th=[19792], 90.00th=[20317], 95.00th=[21103], 00:17:17.415 | 99.00th=[23200], 99.50th=[25822], 99.90th=[38536], 99.95th=[38536], 00:17:17.415 | 99.99th=[38536] 00:17:17.415 bw ( KiB/s): min=16384, max=16384, per=28.05%, avg=16384.00, stdev= 0.00, samples=2 00:17:17.415 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:17:17.415 lat (msec) : 2=0.02%, 10=11.47%, 20=71.92%, 50=16.58% 00:17:17.415 cpu : usr=1.99%, sys=3.67%, ctx=576, majf=0, minf=1 00:17:17.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:17.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.415 issued rwts: total=4096,4153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.415 job1: (groupid=0, jobs=1): err= 0: pid=614152: Wed Jul 24 17:42:38 2024 00:17:17.415 read: IOPS=2920, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1008msec) 00:17:17.415 slat (nsec): min=1458, max=12605k, avg=149309.12, stdev=879942.52 00:17:17.415 clat (usec): min=6574, max=46000, avg=17617.34, stdev=6070.20 00:17:17.415 lat (usec): min=7143, max=46011, avg=17766.65, stdev=6134.52 00:17:17.415 clat percentiles (usec): 00:17:17.415 | 1.00th=[ 7635], 5.00th=[10552], 10.00th=[11600], 20.00th=[13566], 00:17:17.416 | 30.00th=[14484], 40.00th=[15664], 50.00th=[16450], 60.00th=[17695], 00:17:17.416 | 70.00th=[18220], 80.00th=[20579], 90.00th=[24773], 95.00th=[27657], 00:17:17.416 | 99.00th=[40633], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:17:17.416 | 99.99th=[45876] 00:17:17.416 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:17:17.416 slat (usec): min=2, max=41826, avg=174.71, stdev=1137.98 00:17:17.416 clat (usec): min=3351, max=66928, avg=24540.37, stdev=12998.78 00:17:17.416 lat (usec): min=3380, max=66943, avg=24715.08, stdev=13048.08 00:17:17.416 clat percentiles (usec): 00:17:17.416 | 1.00th=[ 6128], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[12125], 00:17:17.416 | 30.00th=[15401], 40.00th=[19268], 50.00th=[22414], 60.00th=[26608], 00:17:17.416 | 70.00th=[31851], 80.00th=[36439], 90.00th=[41681], 95.00th=[44303], 00:17:17.416 | 99.00th=[62129], 99.50th=[63701], 99.90th=[66847], 99.95th=[66847], 00:17:17.416 | 99.99th=[66847] 00:17:17.416 bw ( KiB/s): min=12288, max=12288, per=21.04%, avg=12288.00, stdev= 0.00, samples=2 00:17:17.416 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:17:17.416 lat (msec) : 4=0.05%, 10=7.63%, 20=53.42%, 50=37.12%, 100=1.78% 00:17:17.416 cpu : usr=2.68%, sys=2.78%, ctx=432, majf=0, minf=1 00:17:17.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:17:17.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.416 issued rwts: total=2944,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.416 job2: (groupid=0, jobs=1): err= 0: pid=614158: Wed Jul 24 17:42:38 2024 00:17:17.416 read: IOPS=3328, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1006msec) 00:17:17.416 slat (nsec): min=1112, max=21149k, avg=161111.99, stdev=1052921.24 00:17:17.416 clat (usec): min=3761, max=45900, avg=20385.69, stdev=8994.54 00:17:17.416 lat (usec): min=6074, max=45925, avg=20546.81, stdev=9069.93 00:17:17.416 clat percentiles (usec): 00:17:17.416 | 1.00th=[ 6521], 5.00th=[10552], 10.00th=[11469], 20.00th=[12649], 00:17:17.416 | 30.00th=[13829], 40.00th=[15008], 50.00th=[17433], 60.00th=[20841], 00:17:17.416 | 70.00th=[23462], 80.00th=[28967], 90.00th=[34866], 95.00th=[36963], 00:17:17.416 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[45351], 00:17:17.416 | 99.99th=[45876] 00:17:17.416 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:17:17.416 slat (nsec): min=1919, max=10787k, avg=121700.82, stdev=599334.92 00:17:17.416 clat (usec): min=1867, max=78548, avg=16197.74, stdev=7970.82 00:17:17.416 lat (usec): min=1877, 
max=78556, avg=16319.44, stdev=7979.65 00:17:17.416 clat percentiles (usec): 00:17:17.416 | 1.00th=[ 5473], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11863], 00:17:17.416 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13960], 60.00th=[15139], 00:17:17.416 | 70.00th=[17171], 80.00th=[20579], 90.00th=[23987], 95.00th=[26346], 00:17:17.416 | 99.00th=[61080], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:17:17.416 | 99.99th=[78119] 00:17:17.416 bw ( KiB/s): min=12288, max=16384, per=24.55%, avg=14336.00, stdev=2896.31, samples=2 00:17:17.416 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:17:17.416 lat (msec) : 2=0.04%, 4=0.19%, 10=5.76%, 20=61.71%, 50=31.72% 00:17:17.416 lat (msec) : 100=0.58% 00:17:17.416 cpu : usr=1.49%, sys=3.08%, ctx=618, majf=0, minf=1 00:17:17.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:17.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.416 issued rwts: total=3348,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.416 job3: (groupid=0, jobs=1): err= 0: pid=614160: Wed Jul 24 17:42:38 2024 00:17:17.416 read: IOPS=3524, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1017msec) 00:17:17.416 slat (nsec): min=1551, max=9066.4k, avg=118052.01, stdev=629249.59 00:17:17.416 clat (usec): min=5998, max=45950, avg=14014.02, stdev=5567.26 00:17:17.416 lat (usec): min=6324, max=46735, avg=14132.08, stdev=5622.72 00:17:17.416 clat percentiles (usec): 00:17:17.416 | 1.00th=[ 7111], 5.00th=[ 7832], 10.00th=[ 8717], 20.00th=[10290], 00:17:17.416 | 30.00th=[10945], 40.00th=[11863], 50.00th=[12518], 60.00th=[13698], 00:17:17.416 | 70.00th=[15139], 80.00th=[17171], 90.00th=[20579], 95.00th=[24249], 00:17:17.416 | 99.00th=[37487], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:17:17.416 | 99.99th=[45876] 00:17:17.416 write: IOPS=3972, BW=15.5MiB/s (16.3MB/s)(15.8MiB/1017msec); 0 zone resets 00:17:17.416 slat (usec): min=2, max=10091, avg=140.78, stdev=661.20 00:17:17.416 clat (usec): min=1520, max=46709, avg=19488.92, stdev=9839.35 00:17:17.416 lat (usec): min=1533, max=46714, avg=19629.70, stdev=9891.67 00:17:17.416 clat percentiles (usec): 00:17:17.416 | 1.00th=[ 6718], 5.00th=[ 7767], 10.00th=[ 8717], 20.00th=[10814], 00:17:17.416 | 30.00th=[12649], 40.00th=[14484], 50.00th=[16319], 60.00th=[19530], 00:17:17.416 | 70.00th=[22938], 80.00th=[29230], 90.00th=[35914], 95.00th=[38011], 00:17:17.416 | 99.00th=[43254], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:17:17.416 | 99.99th=[46924] 00:17:17.416 bw ( KiB/s): min=14912, max=16384, per=26.79%, avg=15648.00, stdev=1040.86, samples=2 00:17:17.416 iops : min= 3728, max= 4096, avg=3912.00, stdev=260.22, samples=2 00:17:17.416 lat (msec) : 2=0.03%, 10=16.93%, 20=57.76%, 50=25.28% 00:17:17.416 cpu : usr=2.56%, sys=2.66%, ctx=585, majf=0, minf=1 00:17:17.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:17.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.416 issued rwts: total=3584,4040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.416 00:17:17.416 Run status group 0 (all jobs): 00:17:17.416 READ: bw=53.7MiB/s (56.3MB/s), 11.4MiB/s-15.9MiB/s (12.0MB/s-16.6MB/s), 
io=54.6MiB (57.2MB), run=1006-1017msec 00:17:17.416 WRITE: bw=57.0MiB/s (59.8MB/s), 11.9MiB/s-16.1MiB/s (12.5MB/s-16.9MB/s), io=58.0MiB (60.8MB), run=1006-1017msec 00:17:17.416 00:17:17.416 Disk stats (read/write): 00:17:17.416 nvme0n1: ios=3530/3584, merge=0/0, ticks=53382/51994, in_queue=105376, util=88.90% 00:17:17.416 nvme0n2: ios=2439/2560, merge=0/0, ticks=44209/58673, in_queue=102882, util=99.80% 00:17:17.416 nvme0n3: ios=3046/3072, merge=0/0, ticks=23068/20402, in_queue=43470, util=98.76% 00:17:17.416 nvme0n4: ios=3128/3315, merge=0/0, ticks=40969/64200, in_queue=105169, util=91.62% 00:17:17.416 17:42:38 -- target/fio.sh@55 -- # sync 00:17:17.416 17:42:38 -- target/fio.sh@59 -- # fio_pid=614386 00:17:17.416 17:42:38 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:17.416 17:42:38 -- target/fio.sh@61 -- # sleep 3 00:17:17.416 [global] 00:17:17.416 thread=1 00:17:17.416 invalidate=1 00:17:17.416 rw=read 00:17:17.416 time_based=1 00:17:17.416 runtime=10 00:17:17.416 ioengine=libaio 00:17:17.416 direct=1 00:17:17.416 bs=4096 00:17:17.416 iodepth=1 00:17:17.416 norandommap=1 00:17:17.416 numjobs=1 00:17:17.416 00:17:17.416 [job0] 00:17:17.416 filename=/dev/nvme0n1 00:17:17.416 [job1] 00:17:17.416 filename=/dev/nvme0n2 00:17:17.416 [job2] 00:17:17.416 filename=/dev/nvme0n3 00:17:17.416 [job3] 00:17:17.416 filename=/dev/nvme0n4 00:17:17.416 Could not set queue depth (nvme0n1) 00:17:17.416 Could not set queue depth (nvme0n2) 00:17:17.416 Could not set queue depth (nvme0n3) 00:17:17.416 Could not set queue depth (nvme0n4) 00:17:17.697 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:17.697 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:17.697 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:17.697 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:17.697 fio-3.35 00:17:17.697 Starting 4 threads 00:17:20.252 17:42:41 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:20.509 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=266240, buflen=4096 00:17:20.509 fio: pid=614611, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:20.509 17:42:41 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:20.509 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=737280, buflen=4096 00:17:20.510 fio: pid=614606, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:20.510 17:42:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:20.510 17:42:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:20.767 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=614400, buflen=4096 00:17:20.767 fio: pid=614573, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:20.767 17:42:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:20.767 17:42:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:21.025 17:42:42 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:21.025 17:42:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:21.025 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=774144, buflen=4096 00:17:21.025 fio: pid=614588, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:21.025 00:17:21.025 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=614573: Wed Jul 24 17:42:42 2024 00:17:21.025 read: IOPS=49, BW=197KiB/s (202kB/s)(600KiB/3049msec) 00:17:21.025 slat (usec): min=6, max=12557, avg=97.48, stdev=1020.79 00:17:21.025 clat (usec): min=458, max=43430, avg=20207.41, stdev=20749.87 00:17:21.025 lat (usec): min=465, max=55936, avg=20305.49, stdev=20875.99 00:17:21.025 clat percentiles (usec): 00:17:21.025 | 1.00th=[ 461], 5.00th=[ 498], 10.00th=[ 506], 20.00th=[ 553], 00:17:21.025 | 30.00th=[ 619], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[41681], 00:17:21.025 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:21.025 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:21.025 | 99.99th=[43254] 00:17:21.025 bw ( KiB/s): min= 96, max= 96, per=13.42%, avg=96.00, stdev= 0.00, samples=5 00:17:21.025 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:17:21.025 lat (usec) : 500=5.96%, 750=45.03%, 1000=0.66% 00:17:21.025 lat (msec) : 2=0.66%, 50=47.02% 00:17:21.025 cpu : usr=0.00%, sys=0.16%, ctx=154, majf=0, minf=1 00:17:21.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.025 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.025 issued rwts: total=151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:21.025 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=614588: Wed Jul 24 17:42:42 2024 00:17:21.025 read: IOPS=58, BW=231KiB/s (237kB/s)(756KiB/3266msec) 00:17:21.025 slat (usec): min=3, max=5501, avg=42.87, stdev=398.23 00:17:21.025 clat (usec): min=357, max=43080, avg=17222.83, stdev=20401.42 00:17:21.025 lat (usec): min=366, max=47106, avg=17265.76, stdev=20447.78 00:17:21.025 clat percentiles (usec): 00:17:21.025 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 371], 20.00th=[ 379], 00:17:21.025 | 30.00th=[ 461], 40.00th=[ 502], 50.00th=[ 660], 60.00th=[41681], 00:17:21.025 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:21.025 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:21.025 | 99.99th=[43254] 00:17:21.025 bw ( KiB/s): min= 88, max= 984, per=33.97%, avg=243.17, stdev=362.95, samples=6 00:17:21.025 iops : min= 22, max= 246, avg=60.67, stdev=90.80, samples=6 00:17:21.025 lat (usec) : 500=38.95%, 750=12.11%, 1000=1.58% 00:17:21.025 lat (msec) : 2=6.84%, 50=40.00% 00:17:21.025 cpu : usr=0.06%, sys=0.09%, ctx=191, majf=0, minf=1 00:17:21.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.025 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.025 issued rwts: total=190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.025 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:17:21.025 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=614606: Wed Jul 24 17:42:42 2024 00:17:21.025 read: IOPS=62, BW=250KiB/s (256kB/s)(720KiB/2884msec) 00:17:21.025 slat (nsec): min=3423, max=33017, avg=13667.41, stdev=8388.61 00:17:21.025 clat (usec): min=357, max=43044, avg=15997.22, stdev=20113.02 00:17:21.025 lat (usec): min=364, max=43069, avg=16010.82, stdev=20120.85 00:17:21.025 clat percentiles (usec): 00:17:21.025 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 371], 20.00th=[ 379], 00:17:21.025 | 30.00th=[ 453], 40.00th=[ 498], 50.00th=[ 652], 60.00th=[ 1057], 00:17:21.025 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:21.025 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:21.025 | 99.99th=[43254] 00:17:21.025 bw ( KiB/s): min= 88, max= 984, per=38.03%, avg=272.00, stdev=398.04, samples=5 00:17:21.025 iops : min= 22, max= 246, avg=68.00, stdev=99.51, samples=5 00:17:21.025 lat (usec) : 500=41.99%, 750=12.15%, 1000=0.55% 00:17:21.025 lat (msec) : 2=7.18%, 4=0.55%, 50=37.02% 00:17:21.025 cpu : usr=0.00%, sys=0.17%, ctx=183, majf=0, minf=1 00:17:21.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.025 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.025 issued rwts: total=181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:21.025 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=614611: Wed Jul 24 17:42:42 2024 00:17:21.025 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(260KiB/2691msec) 00:17:21.025 slat (nsec): min=12994, max=34829, avg=22567.02, stdev=2224.48 00:17:21.025 clat (usec): min=1079, max=43053, avg=41348.60, stdev=5075.44 00:17:21.025 lat (usec): min=1113, max=43074, avg=41371.23, stdev=5073.92 00:17:21.025 clat percentiles (usec): 00:17:21.025 | 1.00th=[ 1074], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:21.025 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:21.025 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:21.025 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:21.025 | 99.99th=[43254] 00:17:21.025 bw ( KiB/s): min= 96, max= 96, per=13.42%, avg=96.00, stdev= 0.00, samples=5 00:17:21.025 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:17:21.025 lat (msec) : 2=1.52%, 50=96.97% 00:17:21.025 cpu : usr=0.11%, sys=0.00%, ctx=66, majf=0, minf=2 00:17:21.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.025 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.025 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:21.025 00:17:21.025 Run status group 0 (all jobs): 00:17:21.025 READ: bw=715KiB/s (732kB/s), 96.6KiB/s-250KiB/s (98.9kB/s-256kB/s), io=2336KiB (2392kB), run=2691-3266msec 00:17:21.025 00:17:21.025 Disk stats (read/write): 00:17:21.025 nvme0n1: ios=110/0, merge=0/0, ticks=3966/0, in_queue=3966, util=98.87% 00:17:21.025 nvme0n2: ios=227/0, merge=0/0, ticks=4253/0, in_queue=4253, util=98.70% 00:17:21.025 nvme0n3: ios=224/0, 
merge=0/0, ticks=3292/0, in_queue=3292, util=99.09% 00:17:21.025 nvme0n4: ios=62/0, merge=0/0, ticks=2563/0, in_queue=2563, util=96.45% 00:17:21.283 17:42:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:21.283 17:42:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:21.283 17:42:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:21.283 17:42:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:21.540 17:42:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:21.540 17:42:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:21.798 17:42:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:21.798 17:42:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:21.798 17:42:43 -- target/fio.sh@69 -- # fio_status=0 00:17:21.798 17:42:43 -- target/fio.sh@70 -- # wait 614386 00:17:21.798 17:42:43 -- target/fio.sh@70 -- # fio_status=4 00:17:21.798 17:42:43 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:22.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.056 17:42:43 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:22.056 17:42:43 -- common/autotest_common.sh@1198 -- # local i=0 00:17:22.056 17:42:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:17:22.056 17:42:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.056 17:42:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:22.056 17:42:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.056 17:42:43 -- common/autotest_common.sh@1210 -- # return 0 00:17:22.056 17:42:43 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:22.056 17:42:43 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:22.056 nvmf hotplug test: fio failed as expected 00:17:22.056 17:42:43 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.314 17:42:43 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:22.314 17:42:43 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:22.314 17:42:43 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:22.314 17:42:43 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:22.314 17:42:43 -- target/fio.sh@91 -- # nvmftestfini 00:17:22.314 17:42:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:22.314 17:42:43 -- nvmf/common.sh@116 -- # sync 00:17:22.314 17:42:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:22.314 17:42:43 -- nvmf/common.sh@119 -- # set +e 00:17:22.314 17:42:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:22.314 17:42:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:22.314 rmmod nvme_tcp 00:17:22.314 rmmod nvme_fabrics 00:17:22.314 rmmod nvme_keyring 00:17:22.314 17:42:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:22.314 17:42:43 -- nvmf/common.sh@123 -- # set -e 00:17:22.314 17:42:43 -- nvmf/common.sh@124 -- # return 0 00:17:22.314 17:42:43 -- 
nvmf/common.sh@477 -- # '[' -n 611642 ']' 00:17:22.314 17:42:43 -- nvmf/common.sh@478 -- # killprocess 611642 00:17:22.314 17:42:43 -- common/autotest_common.sh@926 -- # '[' -z 611642 ']' 00:17:22.314 17:42:43 -- common/autotest_common.sh@930 -- # kill -0 611642 00:17:22.314 17:42:43 -- common/autotest_common.sh@931 -- # uname 00:17:22.314 17:42:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:22.314 17:42:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 611642 00:17:22.314 17:42:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:22.314 17:42:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:22.314 17:42:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 611642' 00:17:22.314 killing process with pid 611642 00:17:22.314 17:42:43 -- common/autotest_common.sh@945 -- # kill 611642 00:17:22.314 17:42:43 -- common/autotest_common.sh@950 -- # wait 611642 00:17:22.573 17:42:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:22.573 17:42:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:22.573 17:42:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:22.573 17:42:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.573 17:42:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:22.573 17:42:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.573 17:42:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.573 17:42:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.471 17:42:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:24.471 00:17:24.471 real 0m26.115s 00:17:24.471 user 1m44.911s 00:17:24.471 sys 0m7.035s 00:17:24.471 17:42:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.471 17:42:46 -- common/autotest_common.sh@10 -- # set +x 00:17:24.471 ************************************ 00:17:24.471 END TEST nvmf_fio_target 00:17:24.471 ************************************ 00:17:24.729 17:42:46 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:24.729 17:42:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:24.729 17:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:24.729 17:42:46 -- common/autotest_common.sh@10 -- # set +x 00:17:24.729 ************************************ 00:17:24.729 START TEST nvmf_bdevio 00:17:24.729 ************************************ 00:17:24.729 17:42:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:24.729 * Looking for test storage... 
00:17:24.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.729 17:42:46 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.729 17:42:46 -- nvmf/common.sh@7 -- # uname -s 00:17:24.729 17:42:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.729 17:42:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.729 17:42:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.729 17:42:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.729 17:42:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.729 17:42:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.729 17:42:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.729 17:42:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.729 17:42:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.729 17:42:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.729 17:42:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.729 17:42:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.729 17:42:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.729 17:42:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.729 17:42:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.729 17:42:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.729 17:42:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.729 17:42:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.729 17:42:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.729 17:42:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.729 17:42:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.729 17:42:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.729 17:42:46 -- paths/export.sh@5 -- # export PATH 00:17:24.729 17:42:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.729 17:42:46 -- nvmf/common.sh@46 -- # : 0 00:17:24.729 17:42:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:24.729 17:42:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:24.729 17:42:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:24.729 17:42:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.729 17:42:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.729 17:42:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:24.729 17:42:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:24.729 17:42:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:24.729 17:42:46 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:24.729 17:42:46 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:24.729 17:42:46 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:24.729 17:42:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:24.729 17:42:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.729 17:42:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:24.729 17:42:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:24.729 17:42:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:24.729 17:42:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.729 17:42:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.729 17:42:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.729 17:42:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:24.729 17:42:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:24.729 17:42:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:24.729 17:42:46 -- common/autotest_common.sh@10 -- # set +x 00:17:30.001 17:42:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:30.001 17:42:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:30.001 17:42:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:30.001 17:42:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:30.001 17:42:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:30.001 17:42:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:30.001 17:42:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:30.001 17:42:51 -- nvmf/common.sh@294 -- # net_devs=() 00:17:30.001 17:42:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:30.001 17:42:51 -- nvmf/common.sh@295 
-- # e810=() 00:17:30.001 17:42:51 -- nvmf/common.sh@295 -- # local -ga e810 00:17:30.001 17:42:51 -- nvmf/common.sh@296 -- # x722=() 00:17:30.001 17:42:51 -- nvmf/common.sh@296 -- # local -ga x722 00:17:30.001 17:42:51 -- nvmf/common.sh@297 -- # mlx=() 00:17:30.001 17:42:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:30.001 17:42:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.001 17:42:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:30.001 17:42:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:30.001 17:42:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:30.001 17:42:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:30.001 17:42:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:30.001 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:30.001 17:42:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:30.001 17:42:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:30.001 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:30.001 17:42:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:30.001 17:42:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:30.001 17:42:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.001 17:42:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:30.001 17:42:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.001 17:42:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:30.001 Found 
net devices under 0000:86:00.0: cvl_0_0 00:17:30.001 17:42:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.001 17:42:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:30.001 17:42:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.001 17:42:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:30.001 17:42:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.001 17:42:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:30.001 Found net devices under 0000:86:00.1: cvl_0_1 00:17:30.001 17:42:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.001 17:42:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:30.001 17:42:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:30.001 17:42:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:30.001 17:42:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.001 17:42:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.001 17:42:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.001 17:42:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:30.001 17:42:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.001 17:42:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.001 17:42:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:30.001 17:42:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.001 17:42:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.001 17:42:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:30.001 17:42:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:30.001 17:42:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.001 17:42:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.001 17:42:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.001 17:42:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.001 17:42:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:30.001 17:42:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.001 17:42:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.001 17:42:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.001 17:42:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:30.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:17:30.001 00:17:30.001 --- 10.0.0.2 ping statistics --- 00:17:30.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.001 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:17:30.001 17:42:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:17:30.001 00:17:30.001 --- 10.0.0.1 ping statistics --- 00:17:30.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.001 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:17:30.001 17:42:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.001 17:42:51 -- nvmf/common.sh@410 -- # return 0 00:17:30.001 17:42:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:30.001 17:42:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.001 17:42:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:30.001 17:42:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.001 17:42:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:30.001 17:42:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:30.001 17:42:51 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:30.001 17:42:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:30.001 17:42:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:30.001 17:42:51 -- common/autotest_common.sh@10 -- # set +x 00:17:30.001 17:42:51 -- nvmf/common.sh@469 -- # nvmfpid=618792 00:17:30.001 17:42:51 -- nvmf/common.sh@470 -- # waitforlisten 618792 00:17:30.001 17:42:51 -- common/autotest_common.sh@819 -- # '[' -z 618792 ']' 00:17:30.001 17:42:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.001 17:42:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.001 17:42:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.001 17:42:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.001 17:42:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:30.001 17:42:51 -- common/autotest_common.sh@10 -- # set +x 00:17:30.259 [2024-07-24 17:42:51.616366] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:30.259 [2024-07-24 17:42:51.616411] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.259 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.259 [2024-07-24 17:42:51.674116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.259 [2024-07-24 17:42:51.753346] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:30.259 [2024-07-24 17:42:51.753454] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.259 [2024-07-24 17:42:51.753462] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.259 [2024-07-24 17:42:51.753468] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
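As an aside, the connectivity check above runs between the two E810 ports discovered earlier: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1. Condensed out of the nvmf_tcp_init trace, the plumbing is roughly the following sketch (interface names and addresses as used by this run; requires root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                             # root namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and the reverse direction

The nvmf_tgt being launched here therefore listens on 10.0.0.2:4420 inside the namespace, and everything run from the root namespace (rpc.py, the bdevio client) reaches it over cvl_0_1.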
00:17:30.259 [2024-07-24 17:42:51.753523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:30.259 [2024-07-24 17:42:51.753560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:30.259 [2024-07-24 17:42:51.753593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:30.259 [2024-07-24 17:42:51.753594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:30.821 17:42:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.821 17:42:52 -- common/autotest_common.sh@852 -- # return 0 00:17:30.821 17:42:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:30.821 17:42:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:30.821 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:17:31.079 17:42:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.079 17:42:52 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.079 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:31.079 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:17:31.079 [2024-07-24 17:42:52.450307] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.079 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:31.079 17:42:52 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:31.079 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:31.079 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:17:31.079 Malloc0 00:17:31.079 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:31.079 17:42:52 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.079 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:31.079 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:17:31.079 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:31.079 17:42:52 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.079 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:31.079 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:17:31.079 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:31.079 17:42:52 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.079 17:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:31.079 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:17:31.079 [2024-07-24 17:42:52.493611] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.079 17:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:31.079 17:42:52 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:31.079 17:42:52 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:31.079 17:42:52 -- nvmf/common.sh@520 -- # config=() 00:17:31.079 17:42:52 -- nvmf/common.sh@520 -- # local subsystem config 00:17:31.079 17:42:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:31.079 17:42:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:31.079 { 00:17:31.079 "params": { 00:17:31.079 "name": "Nvme$subsystem", 00:17:31.079 "trtype": "$TEST_TRANSPORT", 00:17:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.079 "adrfam": "ipv4", 00:17:31.079 "trsvcid": 
"$NVMF_PORT", 00:17:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.079 "hdgst": ${hdgst:-false}, 00:17:31.079 "ddgst": ${ddgst:-false} 00:17:31.079 }, 00:17:31.079 "method": "bdev_nvme_attach_controller" 00:17:31.079 } 00:17:31.079 EOF 00:17:31.079 )") 00:17:31.079 17:42:52 -- nvmf/common.sh@542 -- # cat 00:17:31.079 17:42:52 -- nvmf/common.sh@544 -- # jq . 00:17:31.079 17:42:52 -- nvmf/common.sh@545 -- # IFS=, 00:17:31.079 17:42:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:31.079 "params": { 00:17:31.079 "name": "Nvme1", 00:17:31.079 "trtype": "tcp", 00:17:31.079 "traddr": "10.0.0.2", 00:17:31.079 "adrfam": "ipv4", 00:17:31.079 "trsvcid": "4420", 00:17:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.079 "hdgst": false, 00:17:31.079 "ddgst": false 00:17:31.079 }, 00:17:31.079 "method": "bdev_nvme_attach_controller" 00:17:31.079 }' 00:17:31.079 [2024-07-24 17:42:52.538611] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:31.079 [2024-07-24 17:42:52.538655] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619044 ] 00:17:31.079 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.079 [2024-07-24 17:42:52.592557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:31.079 [2024-07-24 17:42:52.665073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.079 [2024-07-24 17:42:52.665171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.079 [2024-07-24 17:42:52.665173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.336 [2024-07-24 17:42:52.817794] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:17:31.336 [2024-07-24 17:42:52.817827] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:31.336 I/O targets: 00:17:31.336 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:31.336 00:17:31.336 00:17:31.336 CUnit - A unit testing framework for C - Version 2.1-3 00:17:31.336 http://cunit.sourceforge.net/ 00:17:31.336 00:17:31.336 00:17:31.336 Suite: bdevio tests on: Nvme1n1 00:17:31.336 Test: blockdev write read block ...passed 00:17:31.336 Test: blockdev write zeroes read block ...passed 00:17:31.336 Test: blockdev write zeroes read no split ...passed 00:17:31.594 Test: blockdev write zeroes read split ...passed 00:17:31.594 Test: blockdev write zeroes read split partial ...passed 00:17:31.594 Test: blockdev reset ...[2024-07-24 17:42:53.062004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:31.594 [2024-07-24 17:42:53.062058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ba590 (9): Bad file descriptor 00:17:31.594 [2024-07-24 17:42:53.075912] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:31.594 passed 00:17:31.594 Test: blockdev write read 8 blocks ...passed 00:17:31.594 Test: blockdev write read size > 128k ...passed 00:17:31.594 Test: blockdev write read invalid size ...passed 00:17:31.594 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:31.594 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:31.594 Test: blockdev write read max offset ...passed 00:17:31.852 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:31.852 Test: blockdev writev readv 8 blocks ...passed 00:17:31.852 Test: blockdev writev readv 30 x 1block ...passed 00:17:31.852 Test: blockdev writev readv block ...passed 00:17:31.852 Test: blockdev writev readv size > 128k ...passed 00:17:31.852 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:31.852 Test: blockdev comparev and writev ...[2024-07-24 17:42:53.350879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.852 [2024-07-24 17:42:53.350907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.852 [2024-07-24 17:42:53.350921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.852 [2024-07-24 17:42:53.350929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:31.852 [2024-07-24 17:42:53.351507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.852 [2024-07-24 17:42:53.351523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:31.852 [2024-07-24 17:42:53.351534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.852 [2024-07-24 17:42:53.351541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:31.852 [2024-07-24 17:42:53.352031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.852 [2024-07-24 17:42:53.352041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:31.852 [2024-07-24 17:42:53.352056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.852 [2024-07-24 17:42:53.352064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:31.852 [2024-07-24 17:42:53.352537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.852 [2024-07-24 17:42:53.352547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:31.852 [2024-07-24 17:42:53.352558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.852 [2024-07-24 17:42:53.352566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:31.852 passed 00:17:31.852 Test: blockdev nvme passthru rw ...passed 00:17:31.852 Test: blockdev nvme passthru vendor specific ...[2024-07-24 17:42:53.435839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.852 [2024-07-24 17:42:53.435856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:31.852 [2024-07-24 17:42:53.436219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.852 [2024-07-24 17:42:53.436229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:31.852 [2024-07-24 17:42:53.436577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.852 [2024-07-24 17:42:53.436587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:31.852 [2024-07-24 17:42:53.436938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.852 [2024-07-24 17:42:53.436948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:31.852 passed 00:17:32.109 Test: blockdev nvme admin passthru ...passed 00:17:32.109 Test: blockdev copy ...passed 00:17:32.110 00:17:32.110 Run Summary: Type Total Ran Passed Failed Inactive 00:17:32.110 suites 1 1 n/a 0 0 00:17:32.110 tests 23 23 23 0 0 00:17:32.110 asserts 152 152 152 0 n/a 00:17:32.110 00:17:32.110 Elapsed time = 1.351 seconds 00:17:32.110 17:42:53 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.110 17:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:32.110 17:42:53 -- common/autotest_common.sh@10 -- # set +x 00:17:32.110 17:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:32.110 17:42:53 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:32.110 17:42:53 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:32.110 17:42:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:32.110 17:42:53 -- nvmf/common.sh@116 -- # sync 00:17:32.110 17:42:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:32.110 17:42:53 -- nvmf/common.sh@119 -- # set +e 00:17:32.110 17:42:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:32.110 17:42:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:32.110 rmmod nvme_tcp 00:17:32.367 rmmod nvme_fabrics 00:17:32.367 rmmod nvme_keyring 00:17:32.367 17:42:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:32.367 17:42:53 -- nvmf/common.sh@123 -- # set -e 00:17:32.367 17:42:53 -- nvmf/common.sh@124 -- # return 0 00:17:32.367 17:42:53 -- nvmf/common.sh@477 -- # '[' -n 618792 ']' 00:17:32.367 17:42:53 -- nvmf/common.sh@478 -- # killprocess 618792 00:17:32.367 17:42:53 -- common/autotest_common.sh@926 -- # '[' -z 618792 ']' 00:17:32.367 17:42:53 -- common/autotest_common.sh@930 -- # kill -0 618792 00:17:32.367 17:42:53 -- common/autotest_common.sh@931 -- # uname 00:17:32.367 17:42:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.367 17:42:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 618792 00:17:32.367 17:42:53 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:17:32.367 17:42:53 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:17:32.367 17:42:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 618792' 00:17:32.367 killing process with pid 618792 00:17:32.367 17:42:53 -- common/autotest_common.sh@945 -- # kill 618792 00:17:32.367 17:42:53 -- common/autotest_common.sh@950 -- # wait 618792 00:17:32.626 17:42:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:32.626 17:42:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:32.626 17:42:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:32.626 17:42:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.626 17:42:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:32.626 17:42:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.626 17:42:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.626 17:42:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.530 17:42:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:34.530 00:17:34.530 real 0m9.969s 00:17:34.531 user 0m12.625s 00:17:34.531 sys 0m4.530s 00:17:34.531 17:42:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:34.531 17:42:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.531 ************************************ 00:17:34.531 END TEST nvmf_bdevio 00:17:34.531 ************************************ 00:17:34.531 17:42:56 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:17:34.531 17:42:56 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:34.531 17:42:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:34.531 17:42:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:34.531 17:42:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.531 ************************************ 00:17:34.531 START TEST nvmf_bdevio_no_huge 00:17:34.531 ************************************ 00:17:34.531 17:42:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:34.789 * Looking for test storage... 
00:17:34.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.790 17:42:56 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.790 17:42:56 -- nvmf/common.sh@7 -- # uname -s 00:17:34.790 17:42:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.790 17:42:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.790 17:42:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.790 17:42:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.790 17:42:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.790 17:42:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.790 17:42:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.790 17:42:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.790 17:42:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.790 17:42:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.790 17:42:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.790 17:42:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.790 17:42:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.790 17:42:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.790 17:42:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.790 17:42:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.790 17:42:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.790 17:42:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.790 17:42:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.790 17:42:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.790 17:42:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.790 17:42:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.790 17:42:56 -- paths/export.sh@5 -- # export PATH 00:17:34.790 17:42:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.790 17:42:56 -- nvmf/common.sh@46 -- # : 0 00:17:34.790 17:42:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:34.790 17:42:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:34.790 17:42:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:34.790 17:42:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.790 17:42:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.790 17:42:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:34.790 17:42:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:34.790 17:42:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:34.790 17:42:56 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:34.790 17:42:56 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:34.790 17:42:56 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:34.790 17:42:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:34.790 17:42:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.790 17:42:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:34.790 17:42:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:34.790 17:42:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:34.790 17:42:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.790 17:42:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.790 17:42:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.790 17:42:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:34.790 17:42:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:34.790 17:42:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:34.790 17:42:56 -- common/autotest_common.sh@10 -- # set +x 00:17:40.067 17:43:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:40.067 17:43:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:40.067 17:43:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:40.067 17:43:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:40.067 17:43:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:40.067 17:43:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:40.067 17:43:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:40.067 17:43:00 -- nvmf/common.sh@294 -- # net_devs=() 00:17:40.067 17:43:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:40.067 17:43:00 -- nvmf/common.sh@295 
-- # e810=() 00:17:40.067 17:43:00 -- nvmf/common.sh@295 -- # local -ga e810 00:17:40.067 17:43:00 -- nvmf/common.sh@296 -- # x722=() 00:17:40.067 17:43:00 -- nvmf/common.sh@296 -- # local -ga x722 00:17:40.067 17:43:00 -- nvmf/common.sh@297 -- # mlx=() 00:17:40.067 17:43:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:40.067 17:43:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.067 17:43:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:40.067 17:43:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:40.067 17:43:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:40.067 17:43:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:40.067 17:43:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:40.067 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:40.067 17:43:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:40.067 17:43:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:40.067 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:40.067 17:43:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:40.067 17:43:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:40.067 17:43:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.067 17:43:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:40.067 17:43:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.067 17:43:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:40.067 Found 
net devices under 0000:86:00.0: cvl_0_0 00:17:40.067 17:43:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.067 17:43:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:40.067 17:43:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.067 17:43:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:40.067 17:43:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.067 17:43:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:40.067 Found net devices under 0000:86:00.1: cvl_0_1 00:17:40.067 17:43:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.067 17:43:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:40.067 17:43:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:40.067 17:43:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:40.067 17:43:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:40.067 17:43:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.067 17:43:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.067 17:43:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.067 17:43:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:40.067 17:43:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.067 17:43:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.067 17:43:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:40.067 17:43:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.067 17:43:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.067 17:43:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:40.067 17:43:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:40.067 17:43:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.067 17:43:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.067 17:43:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.067 17:43:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.067 17:43:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:40.067 17:43:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.067 17:43:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.067 17:43:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.067 17:43:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:40.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:17:40.067 00:17:40.067 --- 10.0.0.2 ping statistics --- 00:17:40.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.067 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:40.067 17:43:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:40.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:17:40.067 00:17:40.067 --- 10.0.0.1 ping statistics --- 00:17:40.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.067 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:17:40.067 17:43:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.067 17:43:01 -- nvmf/common.sh@410 -- # return 0 00:17:40.067 17:43:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:40.067 17:43:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.067 17:43:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:40.068 17:43:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:40.068 17:43:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.068 17:43:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:40.068 17:43:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:40.068 17:43:01 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:40.068 17:43:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:40.068 17:43:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:40.068 17:43:01 -- common/autotest_common.sh@10 -- # set +x 00:17:40.068 17:43:01 -- nvmf/common.sh@469 -- # nvmfpid=622586 00:17:40.068 17:43:01 -- nvmf/common.sh@470 -- # waitforlisten 622586 00:17:40.068 17:43:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:40.068 17:43:01 -- common/autotest_common.sh@819 -- # '[' -z 622586 ']' 00:17:40.068 17:43:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.068 17:43:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:40.068 17:43:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.068 17:43:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:40.068 17:43:01 -- common/autotest_common.sh@10 -- # set +x 00:17:40.068 [2024-07-24 17:43:01.282957] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:40.068 [2024-07-24 17:43:01.283001] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:40.068 [2024-07-24 17:43:01.344895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.068 [2024-07-24 17:43:01.426962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:40.068 [2024-07-24 17:43:01.427069] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.068 [2024-07-24 17:43:01.427077] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.068 [2024-07-24 17:43:01.427083] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
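Relative to the previous bdevio run, the only material change here is memory: both the target and the bdevio client are started with --no-huge and a fixed 1024 MB arena (-s 1024), which surfaces in the EAL parameters above as "-m 1024 --no-huge --iova-mode=va", i.e. DPDK uses anonymous memory instead of hugepages. Stripped of the harness helpers, the two command lines reduce to roughly this sketch (paths taken from this workspace; fd 62 must carry the generated attach config shown further down, so this is not runnable stand-alone):

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
# ...provision the subsystem over /var/tmp/spdk.sock as before, then run the client:
"$SPDK_ROOT/test/bdev/bdevio/bdevio" --json /dev/fd/62 --no-huge -s 1024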
00:17:40.068 [2024-07-24 17:43:01.427191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:40.068 [2024-07-24 17:43:01.427298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:40.068 [2024-07-24 17:43:01.427330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.068 [2024-07-24 17:43:01.427332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:40.675 17:43:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:40.675 17:43:02 -- common/autotest_common.sh@852 -- # return 0 00:17:40.675 17:43:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:40.675 17:43:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:40.675 17:43:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.675 17:43:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.675 17:43:02 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.675 17:43:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:40.675 17:43:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.675 [2024-07-24 17:43:02.126630] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.675 17:43:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:40.675 17:43:02 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.675 17:43:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:40.675 17:43:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.675 Malloc0 00:17:40.675 17:43:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:40.675 17:43:02 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.675 17:43:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:40.675 17:43:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.675 17:43:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:40.676 17:43:02 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.676 17:43:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:40.676 17:43:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.676 17:43:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:40.676 17:43:02 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.676 17:43:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:40.676 17:43:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.676 [2024-07-24 17:43:02.162872] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.676 17:43:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:40.676 17:43:02 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:40.676 17:43:02 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:40.676 17:43:02 -- nvmf/common.sh@520 -- # config=() 00:17:40.676 17:43:02 -- nvmf/common.sh@520 -- # local subsystem config 00:17:40.676 17:43:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:40.676 17:43:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:40.676 { 00:17:40.676 "params": { 00:17:40.676 "name": "Nvme$subsystem", 00:17:40.676 "trtype": "$TEST_TRANSPORT", 00:17:40.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.676 "adrfam": "ipv4", 00:17:40.676 
"trsvcid": "$NVMF_PORT", 00:17:40.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.676 "hdgst": ${hdgst:-false}, 00:17:40.676 "ddgst": ${ddgst:-false} 00:17:40.676 }, 00:17:40.676 "method": "bdev_nvme_attach_controller" 00:17:40.676 } 00:17:40.676 EOF 00:17:40.676 )") 00:17:40.676 17:43:02 -- nvmf/common.sh@542 -- # cat 00:17:40.676 17:43:02 -- nvmf/common.sh@544 -- # jq . 00:17:40.676 17:43:02 -- nvmf/common.sh@545 -- # IFS=, 00:17:40.676 17:43:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:40.676 "params": { 00:17:40.676 "name": "Nvme1", 00:17:40.676 "trtype": "tcp", 00:17:40.676 "traddr": "10.0.0.2", 00:17:40.676 "adrfam": "ipv4", 00:17:40.676 "trsvcid": "4420", 00:17:40.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.676 "hdgst": false, 00:17:40.676 "ddgst": false 00:17:40.676 }, 00:17:40.676 "method": "bdev_nvme_attach_controller" 00:17:40.676 }' 00:17:40.676 [2024-07-24 17:43:02.209245] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:40.676 [2024-07-24 17:43:02.209295] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid622835 ] 00:17:40.676 [2024-07-24 17:43:02.267762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.934 [2024-07-24 17:43:02.351462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.934 [2024-07-24 17:43:02.351559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.934 [2024-07-24 17:43:02.351559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.191 [2024-07-24 17:43:02.610013] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:17:41.191 [2024-07-24 17:43:02.610048] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:41.191 I/O targets: 00:17:41.191 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:41.191 00:17:41.191 00:17:41.191 CUnit - A unit testing framework for C - Version 2.1-3 00:17:41.191 http://cunit.sourceforge.net/ 00:17:41.191 00:17:41.191 00:17:41.191 Suite: bdevio tests on: Nvme1n1 00:17:41.191 Test: blockdev write read block ...passed 00:17:41.191 Test: blockdev write zeroes read block ...passed 00:17:41.191 Test: blockdev write zeroes read no split ...passed 00:17:41.191 Test: blockdev write zeroes read split ...passed 00:17:41.449 Test: blockdev write zeroes read split partial ...passed 00:17:41.449 Test: blockdev reset ...[2024-07-24 17:43:02.811601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:41.449 [2024-07-24 17:43:02.811659] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de2ea0 (9): Bad file descriptor 00:17:41.449 [2024-07-24 17:43:02.869842] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:41.449 passed 00:17:41.449 Test: blockdev write read 8 blocks ...passed 00:17:41.449 Test: blockdev write read size > 128k ...passed 00:17:41.449 Test: blockdev write read invalid size ...passed 00:17:41.449 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:41.449 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:41.449 Test: blockdev write read max offset ...passed 00:17:41.706 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:41.706 Test: blockdev writev readv 8 blocks ...passed 00:17:41.706 Test: blockdev writev readv 30 x 1block ...passed 00:17:41.706 Test: blockdev writev readv block ...passed 00:17:41.706 Test: blockdev writev readv size > 128k ...passed 00:17:41.706 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:41.706 Test: blockdev comparev and writev ...[2024-07-24 17:43:03.105077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.706 [2024-07-24 17:43:03.105103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.706 [2024-07-24 17:43:03.105118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.706 [2024-07-24 17:43:03.105127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:41.706 [2024-07-24 17:43:03.105626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.706 [2024-07-24 17:43:03.105637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:41.706 [2024-07-24 17:43:03.105648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.706 [2024-07-24 17:43:03.105656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:41.706 [2024-07-24 17:43:03.106169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.706 [2024-07-24 17:43:03.106181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:41.706 [2024-07-24 17:43:03.106194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.706 [2024-07-24 17:43:03.106202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:41.706 [2024-07-24 17:43:03.106693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.707 [2024-07-24 17:43:03.106703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:41.707 [2024-07-24 17:43:03.106715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.707 [2024-07-24 17:43:03.106722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:41.707 passed 00:17:41.707 Test: blockdev nvme passthru rw ...passed 00:17:41.707 Test: blockdev nvme passthru vendor specific ...[2024-07-24 17:43:03.190756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.707 [2024-07-24 17:43:03.190770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:41.707 [2024-07-24 17:43:03.191118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.707 [2024-07-24 17:43:03.191130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:41.707 [2024-07-24 17:43:03.191479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.707 [2024-07-24 17:43:03.191489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:41.707 [2024-07-24 17:43:03.191836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.707 [2024-07-24 17:43:03.191846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:41.707 passed 00:17:41.707 Test: blockdev nvme admin passthru ...passed 00:17:41.707 Test: blockdev copy ...passed 00:17:41.707 00:17:41.707 Run Summary: Type Total Ran Passed Failed Inactive 00:17:41.707 suites 1 1 n/a 0 0 00:17:41.707 tests 23 23 23 0 0 00:17:41.707 asserts 152 152 152 0 n/a 00:17:41.707 00:17:41.707 Elapsed time = 1.266 seconds 00:17:41.963 17:43:03 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.963 17:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.963 17:43:03 -- common/autotest_common.sh@10 -- # set +x 00:17:41.963 17:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.963 17:43:03 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:41.963 17:43:03 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:41.963 17:43:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:41.963 17:43:03 -- nvmf/common.sh@116 -- # sync 00:17:41.963 17:43:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:41.963 17:43:03 -- nvmf/common.sh@119 -- # set +e 00:17:41.963 17:43:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:41.963 17:43:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:41.963 rmmod nvme_tcp 00:17:42.220 rmmod nvme_fabrics 00:17:42.220 rmmod nvme_keyring 00:17:42.220 17:43:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:42.220 17:43:03 -- nvmf/common.sh@123 -- # set -e 00:17:42.220 17:43:03 -- nvmf/common.sh@124 -- # return 0 00:17:42.220 17:43:03 -- nvmf/common.sh@477 -- # '[' -n 622586 ']' 00:17:42.220 17:43:03 -- nvmf/common.sh@478 -- # killprocess 622586 00:17:42.220 17:43:03 -- common/autotest_common.sh@926 -- # '[' -z 622586 ']' 00:17:42.220 17:43:03 -- common/autotest_common.sh@930 -- # kill -0 622586 00:17:42.220 17:43:03 -- common/autotest_common.sh@931 -- # uname 00:17:42.220 17:43:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:42.220 17:43:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 622586 00:17:42.220 17:43:03 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:17:42.220 17:43:03 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:17:42.220 17:43:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 622586' 00:17:42.220 killing process with pid 622586 00:17:42.220 17:43:03 -- common/autotest_common.sh@945 -- # kill 622586 00:17:42.220 17:43:03 -- common/autotest_common.sh@950 -- # wait 622586 00:17:42.479 17:43:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:42.479 17:43:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:42.479 17:43:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:42.479 17:43:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.479 17:43:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:42.479 17:43:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.479 17:43:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.479 17:43:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.016 17:43:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:45.016 00:17:45.016 real 0m9.959s 00:17:45.016 user 0m13.691s 00:17:45.016 sys 0m4.672s 00:17:45.016 17:43:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.016 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:17:45.016 ************************************ 00:17:45.016 END TEST nvmf_bdevio_no_huge 00:17:45.016 ************************************ 00:17:45.016 17:43:06 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:45.016 17:43:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:45.016 17:43:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:45.016 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:17:45.016 ************************************ 00:17:45.016 START TEST nvmf_tls 00:17:45.016 ************************************ 00:17:45.016 17:43:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:45.016 * Looking for test storage... 
00:17:45.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.016 17:43:06 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.016 17:43:06 -- nvmf/common.sh@7 -- # uname -s 00:17:45.016 17:43:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.016 17:43:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.016 17:43:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.016 17:43:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.016 17:43:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.016 17:43:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.016 17:43:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.016 17:43:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.016 17:43:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.016 17:43:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.016 17:43:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.016 17:43:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.016 17:43:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.016 17:43:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.016 17:43:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.016 17:43:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.016 17:43:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.017 17:43:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.017 17:43:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.017 17:43:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.017 17:43:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.017 17:43:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.017 17:43:06 -- paths/export.sh@5 -- # export PATH 00:17:45.017 17:43:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.017 17:43:06 -- nvmf/common.sh@46 -- # : 0 00:17:45.017 17:43:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:45.017 17:43:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:45.017 17:43:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:45.017 17:43:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.017 17:43:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.017 17:43:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:45.017 17:43:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:45.017 17:43:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:45.017 17:43:06 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.017 17:43:06 -- target/tls.sh@71 -- # nvmftestinit 00:17:45.017 17:43:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:45.017 17:43:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.017 17:43:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:45.017 17:43:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:45.017 17:43:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:45.017 17:43:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.017 17:43:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.017 17:43:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.017 17:43:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:45.017 17:43:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:45.017 17:43:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:45.017 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:17:50.289 17:43:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:50.289 17:43:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:50.289 17:43:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:50.289 17:43:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:50.289 17:43:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:50.289 17:43:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:50.289 17:43:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:50.289 17:43:11 -- nvmf/common.sh@294 -- # net_devs=() 00:17:50.289 17:43:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:50.289 17:43:11 -- nvmf/common.sh@295 -- # e810=() 00:17:50.289 
17:43:11 -- nvmf/common.sh@295 -- # local -ga e810 00:17:50.289 17:43:11 -- nvmf/common.sh@296 -- # x722=() 00:17:50.289 17:43:11 -- nvmf/common.sh@296 -- # local -ga x722 00:17:50.289 17:43:11 -- nvmf/common.sh@297 -- # mlx=() 00:17:50.289 17:43:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:50.289 17:43:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.289 17:43:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:50.289 17:43:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:50.289 17:43:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:50.289 17:43:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:50.289 17:43:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:50.289 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:50.289 17:43:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:50.289 17:43:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:50.289 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:50.289 17:43:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:50.289 17:43:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:50.289 17:43:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.289 17:43:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:50.289 17:43:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.289 17:43:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:50.289 Found net devices under 
0000:86:00.0: cvl_0_0 00:17:50.289 17:43:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.289 17:43:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:50.289 17:43:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.289 17:43:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:50.289 17:43:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.289 17:43:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:50.289 Found net devices under 0000:86:00.1: cvl_0_1 00:17:50.289 17:43:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.289 17:43:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:50.289 17:43:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:50.289 17:43:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:50.289 17:43:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:50.289 17:43:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.289 17:43:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.289 17:43:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.289 17:43:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:50.289 17:43:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.289 17:43:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.289 17:43:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:50.289 17:43:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.290 17:43:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.290 17:43:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:50.290 17:43:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:50.290 17:43:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.290 17:43:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.290 17:43:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.290 17:43:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.290 17:43:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:50.290 17:43:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.290 17:43:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.290 17:43:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.290 17:43:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:50.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:17:50.290 00:17:50.290 --- 10.0.0.2 ping statistics --- 00:17:50.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.290 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:50.290 17:43:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:50.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:17:50.290 00:17:50.290 --- 10.0.0.1 ping statistics --- 00:17:50.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.290 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:17:50.290 17:43:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.290 17:43:11 -- nvmf/common.sh@410 -- # return 0 00:17:50.290 17:43:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:50.290 17:43:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.290 17:43:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:50.290 17:43:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:50.290 17:43:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.290 17:43:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:50.290 17:43:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:50.290 17:43:11 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:50.290 17:43:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:50.290 17:43:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:50.290 17:43:11 -- common/autotest_common.sh@10 -- # set +x 00:17:50.290 17:43:11 -- nvmf/common.sh@469 -- # nvmfpid=626446 00:17:50.290 17:43:11 -- nvmf/common.sh@470 -- # waitforlisten 626446 00:17:50.290 17:43:11 -- common/autotest_common.sh@819 -- # '[' -z 626446 ']' 00:17:50.290 17:43:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.290 17:43:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:50.290 17:43:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.290 17:43:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:50.290 17:43:11 -- common/autotest_common.sh@10 -- # set +x 00:17:50.290 17:43:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:50.290 [2024-07-24 17:43:11.767436] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:50.290 [2024-07-24 17:43:11.767479] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.290 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.290 [2024-07-24 17:43:11.824841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.549 [2024-07-24 17:43:11.901967] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:50.549 [2024-07-24 17:43:11.902075] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.549 [2024-07-24 17:43:11.902083] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.549 [2024-07-24 17:43:11.902090] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:50.549 [2024-07-24 17:43:11.902107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.116 17:43:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:51.116 17:43:12 -- common/autotest_common.sh@852 -- # return 0 00:17:51.116 17:43:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:51.116 17:43:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:51.116 17:43:12 -- common/autotest_common.sh@10 -- # set +x 00:17:51.116 17:43:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.116 17:43:12 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:17:51.116 17:43:12 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:51.374 true 00:17:51.374 17:43:12 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.374 17:43:12 -- target/tls.sh@82 -- # jq -r .tls_version 00:17:51.374 17:43:12 -- target/tls.sh@82 -- # version=0 00:17:51.374 17:43:12 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:17:51.375 17:43:12 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:51.633 17:43:13 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.633 17:43:13 -- target/tls.sh@90 -- # jq -r .tls_version 00:17:51.891 17:43:13 -- target/tls.sh@90 -- # version=13 00:17:51.891 17:43:13 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:17:51.891 17:43:13 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:51.891 17:43:13 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.891 17:43:13 -- target/tls.sh@98 -- # jq -r .tls_version 00:17:52.149 17:43:13 -- target/tls.sh@98 -- # version=7 00:17:52.149 17:43:13 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:17:52.149 17:43:13 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.149 17:43:13 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:52.149 17:43:13 -- target/tls.sh@105 -- # ktls=false 00:17:52.149 17:43:13 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:17:52.149 17:43:13 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:52.408 17:43:13 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.408 17:43:13 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:52.666 17:43:14 -- target/tls.sh@113 -- # ktls=true 00:17:52.666 17:43:14 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:17:52.666 17:43:14 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:52.666 17:43:14 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.666 17:43:14 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:17:52.924 17:43:14 -- target/tls.sh@121 -- # ktls=false 00:17:52.924 17:43:14 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:17:52.924 17:43:14 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:17:52.924 17:43:14 -- target/tls.sh@49 -- # local key hash crc 00:17:52.924 17:43:14 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:17:52.924 17:43:14 -- target/tls.sh@51 -- # hash=01 00:17:52.924 17:43:14 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:17:52.924 17:43:14 -- target/tls.sh@52 -- # gzip -1 -c 00:17:52.924 17:43:14 -- target/tls.sh@52 -- # tail -c8 00:17:52.924 17:43:14 -- target/tls.sh@52 -- # head -c 4 00:17:52.924 17:43:14 -- target/tls.sh@52 -- # crc='p$H�' 00:17:52.924 17:43:14 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:52.924 17:43:14 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:17:52.924 17:43:14 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:52.924 17:43:14 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:52.924 17:43:14 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:17:52.924 17:43:14 -- target/tls.sh@49 -- # local key hash crc 00:17:52.924 17:43:14 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:17:52.924 17:43:14 -- target/tls.sh@51 -- # hash=01 00:17:52.924 17:43:14 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:17:52.924 17:43:14 -- target/tls.sh@52 -- # tail -c8 00:17:52.924 17:43:14 -- target/tls.sh@52 -- # head -c 4 00:17:52.924 17:43:14 -- target/tls.sh@52 -- # gzip -1 -c 00:17:52.924 17:43:14 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:17:52.924 17:43:14 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:52.924 17:43:14 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:17:52.924 17:43:14 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:52.924 17:43:14 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:52.924 17:43:14 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:52.924 17:43:14 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:52.924 17:43:14 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:52.924 17:43:14 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:52.924 17:43:14 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:52.924 17:43:14 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:52.924 17:43:14 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:53.183 17:43:14 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:53.441 17:43:14 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:53.441 17:43:14 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:53.441 17:43:14 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:53.441 [2024-07-24 17:43:15.000001] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
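The format_interchange_psk steps traced here boil down to appending a CRC32 of the configured key and base64-encoding the result. The sketch below is reconstructed from the commands in the trace (gzip -1 is used purely as a cheap CRC32 source) and is not an authoritative statement of the TLS PSK interchange format.

# Derive the interchange form of a configured hex PSK (sketch, mirrors the trace above).
key=00112233445566778899aabbccddeeff     # configured PSK
hash=01                                  # hash identifier used by the test
# gzip -1 ends its output with an 8-byte trailer: CRC32 (4 bytes) + input size (4 bytes),
# so the first 4 of the last 8 bytes are the CRC32 of the key string. The CRC bytes are
# binary; holding them in a shell variable mirrors what the test script itself does.
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
# Interchange form: NVMeTLSkey-1:<hash>:<base64(key || crc)>:
psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
echo "$psk"   # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
echo -n "$psk" > key1.txt && chmod 0600 key1.txt   # 0600, as in the trace above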
00:17:53.441 17:43:15 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:53.699 17:43:15 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:53.957 [2024-07-24 17:43:15.332889] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:53.957 [2024-07-24 17:43:15.333077] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.957 17:43:15 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:53.957 malloc0 00:17:53.957 17:43:15 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:54.216 17:43:15 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:54.475 17:43:15 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:54.475 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.447 Initializing NVMe Controllers 00:18:04.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.447 Initialization complete. Launching workers. 
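Condensed from the RPCs and the perf invocation traced above (long workspace paths shortened to rpc.py / spdk_nvme_perf, and key1.txt standing for the PSK file written earlier), the TLS data path is exercised roughly as follows; the target was started with --wait-for-rpc so that sock_impl_set_options -i ssl --tls-version 13 could be applied before framework_start_init.

# Target side: TCP transport, subsystem with a TLS-enabled listener (-k), a malloc
# namespace, and a host entry that carries the PSK.
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt

# Initiator side (run in the same namespace as in the trace): perf over the ssl
# sock implementation, pointing at the same PSK file.
ip netns exec cvl_0_0_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path key1.txt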
00:18:04.447 ======================================================== 00:18:04.447 Latency(us) 00:18:04.447 Device Information : IOPS MiB/s Average min max 00:18:04.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17164.15 67.05 3729.08 794.60 6171.60 00:18:04.447 ======================================================== 00:18:04.447 Total : 17164.15 67.05 3729.08 794.60 6171.60 00:18:04.447 00:18:04.447 17:43:25 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:04.447 17:43:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.447 17:43:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.447 17:43:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.447 17:43:25 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:18:04.447 17:43:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.447 17:43:25 -- target/tls.sh@28 -- # bdevperf_pid=628947 00:18:04.447 17:43:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.447 17:43:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.447 17:43:25 -- target/tls.sh@31 -- # waitforlisten 628947 /var/tmp/bdevperf.sock 00:18:04.447 17:43:25 -- common/autotest_common.sh@819 -- # '[' -z 628947 ']' 00:18:04.447 17:43:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.447 17:43:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:04.447 17:43:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.447 17:43:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:04.447 17:43:25 -- common/autotest_common.sh@10 -- # set +x 00:18:04.447 [2024-07-24 17:43:25.953062] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:04.447 [2024-07-24 17:43:25.953108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628947 ] 00:18:04.447 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.447 [2024-07-24 17:43:26.003253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.759 [2024-07-24 17:43:26.080679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.325 17:43:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:05.325 17:43:26 -- common/autotest_common.sh@852 -- # return 0 00:18:05.325 17:43:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:05.325 [2024-07-24 17:43:26.906226] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:05.583 TLSTESTn1 00:18:05.583 17:43:27 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:05.583 Running I/O for 10 seconds... 00:18:17.784 00:18:17.784 Latency(us) 00:18:17.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.784 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.784 Verification LBA range: start 0x0 length 0x2000 00:18:17.784 TLSTESTn1 : 10.05 1432.08 5.59 0.00 0.00 89257.19 7921.31 129476.34 00:18:17.784 =================================================================================================================== 00:18:17.784 Total : 1432.08 5.59 0.00 0.00 89257.19 7921.31 129476.34 00:18:17.784 0 00:18:17.784 17:43:37 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:17.784 17:43:37 -- target/tls.sh@45 -- # killprocess 628947 00:18:17.784 17:43:37 -- common/autotest_common.sh@926 -- # '[' -z 628947 ']' 00:18:17.784 17:43:37 -- common/autotest_common.sh@930 -- # kill -0 628947 00:18:17.784 17:43:37 -- common/autotest_common.sh@931 -- # uname 00:18:17.784 17:43:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:17.784 17:43:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 628947 00:18:17.784 17:43:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:17.784 17:43:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:17.784 17:43:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 628947' 00:18:17.784 killing process with pid 628947 00:18:17.784 17:43:37 -- common/autotest_common.sh@945 -- # kill 628947 00:18:17.784 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.784 00:18:17.784 Latency(us) 00:18:17.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.784 =================================================================================================================== 00:18:17.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.784 17:43:37 -- common/autotest_common.sh@950 -- # wait 628947 00:18:17.784 17:43:37 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:17.784 17:43:37 -- common/autotest_common.sh@640 -- # local es=0 00:18:17.784 17:43:37 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:17.784 17:43:37 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:18:17.784 17:43:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:17.784 17:43:37 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:18:17.784 17:43:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:17.784 17:43:37 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:17.784 17:43:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.784 17:43:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.784 17:43:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:17.784 17:43:37 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:18:17.784 17:43:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.784 17:43:37 -- target/tls.sh@28 -- # bdevperf_pid=630881 00:18:17.784 17:43:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.784 17:43:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.784 17:43:37 -- target/tls.sh@31 -- # waitforlisten 630881 /var/tmp/bdevperf.sock 00:18:17.784 17:43:37 -- common/autotest_common.sh@819 -- # '[' -z 630881 ']' 00:18:17.784 17:43:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.784 17:43:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:17.784 17:43:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.784 17:43:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:17.784 17:43:37 -- common/autotest_common.sh@10 -- # set +x 00:18:17.785 [2024-07-24 17:43:37.490724] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:17.785 [2024-07-24 17:43:37.490770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630881 ] 00:18:17.785 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.785 [2024-07-24 17:43:37.539407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.785 [2024-07-24 17:43:37.609920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.785 17:43:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:17.785 17:43:38 -- common/autotest_common.sh@852 -- # return 0 00:18:17.785 17:43:38 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:17.785 [2024-07-24 17:43:38.440662] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.785 [2024-07-24 17:43:38.445462] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:17.785 [2024-07-24 17:43:38.446141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25290c0 (107): Transport endpoint is not connected 00:18:17.785 [2024-07-24 17:43:38.447133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25290c0 (9): Bad file descriptor 00:18:17.785 [2024-07-24 17:43:38.448134] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:17.785 [2024-07-24 17:43:38.448144] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:17.785 [2024-07-24 17:43:38.448151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:17.785 request: 00:18:17.785 { 00:18:17.785 "name": "TLSTEST", 00:18:17.785 "trtype": "tcp", 00:18:17.785 "traddr": "10.0.0.2", 00:18:17.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.785 "adrfam": "ipv4", 00:18:17.785 "trsvcid": "4420", 00:18:17.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.785 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:18:17.785 "method": "bdev_nvme_attach_controller", 00:18:17.785 "req_id": 1 00:18:17.785 } 00:18:17.785 Got JSON-RPC error response 00:18:17.785 response: 00:18:17.785 { 00:18:17.785 "code": -32602, 00:18:17.785 "message": "Invalid parameters" 00:18:17.785 } 00:18:17.785 17:43:38 -- target/tls.sh@36 -- # killprocess 630881 00:18:17.785 17:43:38 -- common/autotest_common.sh@926 -- # '[' -z 630881 ']' 00:18:17.785 17:43:38 -- common/autotest_common.sh@930 -- # kill -0 630881 00:18:17.785 17:43:38 -- common/autotest_common.sh@931 -- # uname 00:18:17.785 17:43:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:17.785 17:43:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 630881 00:18:17.785 17:43:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:17.785 17:43:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:17.785 17:43:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 630881' 00:18:17.785 killing process with pid 630881 00:18:17.785 17:43:38 -- common/autotest_common.sh@945 -- # kill 630881 00:18:17.785 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.785 00:18:17.785 Latency(us) 00:18:17.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.785 =================================================================================================================== 00:18:17.785 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.785 17:43:38 -- common/autotest_common.sh@950 -- # wait 630881 00:18:17.785 17:43:38 -- target/tls.sh@37 -- # return 1 00:18:17.785 17:43:38 -- common/autotest_common.sh@643 -- # es=1 00:18:17.785 17:43:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:17.785 17:43:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:17.785 17:43:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:17.785 17:43:38 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:17.785 17:43:38 -- common/autotest_common.sh@640 -- # local es=0 00:18:17.785 17:43:38 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:17.785 17:43:38 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:18:17.785 17:43:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:17.785 17:43:38 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:18:17.785 17:43:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:17.785 17:43:38 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:17.785 17:43:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.785 17:43:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.785 17:43:38 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:18:17.785 17:43:38 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:18:17.785 17:43:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.785 17:43:38 -- target/tls.sh@28 -- # bdevperf_pid=631120 00:18:17.785 17:43:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.785 17:43:38 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.785 17:43:38 -- target/tls.sh@31 -- # waitforlisten 631120 /var/tmp/bdevperf.sock 00:18:17.785 17:43:38 -- common/autotest_common.sh@819 -- # '[' -z 631120 ']' 00:18:17.785 17:43:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.785 17:43:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:17.785 17:43:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.785 17:43:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:17.785 17:43:38 -- common/autotest_common.sh@10 -- # set +x 00:18:17.785 [2024-07-24 17:43:38.759295] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:17.785 [2024-07-24 17:43:38.759341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631120 ] 00:18:17.785 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.785 [2024-07-24 17:43:38.808956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.785 [2024-07-24 17:43:38.885748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.043 17:43:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:18.043 17:43:39 -- common/autotest_common.sh@852 -- # return 0 00:18:18.043 17:43:39 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:18.302 [2024-07-24 17:43:39.699443] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.302 [2024-07-24 17:43:39.706896] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:18.302 [2024-07-24 17:43:39.706917] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:18.302 [2024-07-24 17:43:39.706938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:18.302 [2024-07-24 17:43:39.707340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228f0c0 (107): Transport endpoint is not connected 00:18:18.302 [2024-07-24 17:43:39.707986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x228f0c0 (9): Bad file descriptor 00:18:18.302 [2024-07-24 17:43:39.708987] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:18.302 [2024-07-24 17:43:39.709000] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:18.302 [2024-07-24 17:43:39.709007] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:18.302 request: 00:18:18.302 { 00:18:18.302 "name": "TLSTEST", 00:18:18.302 "trtype": "tcp", 00:18:18.302 "traddr": "10.0.0.2", 00:18:18.302 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:18.302 "adrfam": "ipv4", 00:18:18.302 "trsvcid": "4420", 00:18:18.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.302 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:18:18.302 "method": "bdev_nvme_attach_controller", 00:18:18.302 "req_id": 1 00:18:18.302 } 00:18:18.302 Got JSON-RPC error response 00:18:18.302 response: 00:18:18.302 { 00:18:18.302 "code": -32602, 00:18:18.302 "message": "Invalid parameters" 00:18:18.302 } 00:18:18.302 17:43:39 -- target/tls.sh@36 -- # killprocess 631120 00:18:18.303 17:43:39 -- common/autotest_common.sh@926 -- # '[' -z 631120 ']' 00:18:18.303 17:43:39 -- common/autotest_common.sh@930 -- # kill -0 631120 00:18:18.303 17:43:39 -- common/autotest_common.sh@931 -- # uname 00:18:18.303 17:43:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:18.303 17:43:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 631120 00:18:18.303 17:43:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:18.303 17:43:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:18.303 17:43:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 631120' 00:18:18.303 killing process with pid 631120 00:18:18.303 17:43:39 -- common/autotest_common.sh@945 -- # kill 631120 00:18:18.303 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.303 00:18:18.303 Latency(us) 00:18:18.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.303 =================================================================================================================== 00:18:18.303 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.303 17:43:39 -- common/autotest_common.sh@950 -- # wait 631120 00:18:18.562 17:43:39 -- target/tls.sh@37 -- # return 1 00:18:18.562 17:43:39 -- common/autotest_common.sh@643 -- # es=1 00:18:18.562 17:43:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:18.562 17:43:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:18.562 17:43:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:18.562 17:43:39 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:18.562 17:43:39 -- common/autotest_common.sh@640 -- # local es=0 00:18:18.562 17:43:39 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:18.562 17:43:39 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:18:18.562 17:43:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:18.562 17:43:39 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:18:18.562 17:43:39 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:18.562 17:43:39 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:18.562 17:43:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.562 17:43:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:18.562 17:43:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.562 17:43:39 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:18:18.562 17:43:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.562 17:43:39 -- target/tls.sh@28 -- # bdevperf_pid=631281 00:18:18.562 17:43:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.562 17:43:39 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.562 17:43:39 -- target/tls.sh@31 -- # waitforlisten 631281 /var/tmp/bdevperf.sock 00:18:18.562 17:43:39 -- common/autotest_common.sh@819 -- # '[' -z 631281 ']' 00:18:18.562 17:43:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.562 17:43:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:18.562 17:43:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.562 17:43:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:18.562 17:43:39 -- common/autotest_common.sh@10 -- # set +x 00:18:18.562 [2024-07-24 17:43:40.014721] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:18.562 [2024-07-24 17:43:40.014829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631281 ] 00:18:18.562 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.562 [2024-07-24 17:43:40.067656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.562 [2024-07-24 17:43:40.154078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.498 17:43:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:19.498 17:43:40 -- common/autotest_common.sh@852 -- # return 0 00:18:19.498 17:43:40 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:19.498 [2024-07-24 17:43:40.976905] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.498 [2024-07-24 17:43:40.981779] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:19.498 [2024-07-24 17:43:40.981802] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:19.498 [2024-07-24 17:43:40.981828] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:19.498 [2024-07-24 17:43:40.982465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a480c0 (107): Transport endpoint is not connected 00:18:19.498 [2024-07-24 17:43:40.983457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a480c0 (9): Bad file descriptor 00:18:19.498 [2024-07-24 17:43:40.984459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:19.498 [2024-07-24 17:43:40.984475] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:19.498 [2024-07-24 17:43:40.984482] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
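Both attach attempts above fail with "Could not find PSK for identity": per the error text, the target looks the pre-shared key up under an identity string built from the prefix NVMe0R01, the host NQN and the subsystem NQN, and no key is registered for the host2/cnode1 or host1/cnode2 pairings tried here. A small illustration of that identity string, assembled outside the test script (the hostnqn/subnqn values are the ones from the first failing call):

    # Illustration only: the identity the target searches for, as reported by
    # posix_sock_psk_find_session_server_cb in the errors above.
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    echo "NVMe0R01 ${hostnqn} ${subnqn}"
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1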
00:18:19.498 request: 00:18:19.498 { 00:18:19.498 "name": "TLSTEST", 00:18:19.498 "trtype": "tcp", 00:18:19.498 "traddr": "10.0.0.2", 00:18:19.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.498 "adrfam": "ipv4", 00:18:19.498 "trsvcid": "4420", 00:18:19.498 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:19.498 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:18:19.498 "method": "bdev_nvme_attach_controller", 00:18:19.498 "req_id": 1 00:18:19.498 } 00:18:19.498 Got JSON-RPC error response 00:18:19.498 response: 00:18:19.498 { 00:18:19.498 "code": -32602, 00:18:19.498 "message": "Invalid parameters" 00:18:19.498 } 00:18:19.498 17:43:40 -- target/tls.sh@36 -- # killprocess 631281 00:18:19.498 17:43:40 -- common/autotest_common.sh@926 -- # '[' -z 631281 ']' 00:18:19.498 17:43:40 -- common/autotest_common.sh@930 -- # kill -0 631281 00:18:19.498 17:43:40 -- common/autotest_common.sh@931 -- # uname 00:18:19.498 17:43:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:19.498 17:43:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 631281 00:18:19.498 17:43:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:19.498 17:43:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:19.498 17:43:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 631281' 00:18:19.498 killing process with pid 631281 00:18:19.498 17:43:41 -- common/autotest_common.sh@945 -- # kill 631281 00:18:19.498 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.498 00:18:19.498 Latency(us) 00:18:19.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.498 =================================================================================================================== 00:18:19.498 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.498 17:43:41 -- common/autotest_common.sh@950 -- # wait 631281 00:18:19.757 17:43:41 -- target/tls.sh@37 -- # return 1 00:18:19.757 17:43:41 -- common/autotest_common.sh@643 -- # es=1 00:18:19.757 17:43:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:19.757 17:43:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:19.757 17:43:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:19.757 17:43:41 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:19.757 17:43:41 -- common/autotest_common.sh@640 -- # local es=0 00:18:19.757 17:43:41 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:19.757 17:43:41 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:18:19.757 17:43:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:19.757 17:43:41 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:18:19.757 17:43:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:19.757 17:43:41 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:19.757 17:43:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.757 17:43:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.757 17:43:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.757 17:43:41 -- target/tls.sh@23 -- # psk= 00:18:19.757 17:43:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.757 17:43:41 -- target/tls.sh@28 -- # 
bdevperf_pid=631456 00:18:19.757 17:43:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.757 17:43:41 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.757 17:43:41 -- target/tls.sh@31 -- # waitforlisten 631456 /var/tmp/bdevperf.sock 00:18:19.757 17:43:41 -- common/autotest_common.sh@819 -- # '[' -z 631456 ']' 00:18:19.757 17:43:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.757 17:43:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:19.757 17:43:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.757 17:43:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:19.757 17:43:41 -- common/autotest_common.sh@10 -- # set +x 00:18:19.757 [2024-07-24 17:43:41.291924] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:19.757 [2024-07-24 17:43:41.291975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631456 ] 00:18:19.757 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.757 [2024-07-24 17:43:41.343680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.016 [2024-07-24 17:43:41.415357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.583 17:43:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:20.583 17:43:42 -- common/autotest_common.sh@852 -- # return 0 00:18:20.583 17:43:42 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:20.841 [2024-07-24 17:43:42.233437] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:20.841 [2024-07-24 17:43:42.235128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad9740 (9): Bad file descriptor 00:18:20.841 [2024-07-24 17:43:42.236127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:20.841 [2024-07-24 17:43:42.236136] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:20.841 [2024-07-24 17:43:42.236143] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:20.841 request: 00:18:20.841 { 00:18:20.841 "name": "TLSTEST", 00:18:20.841 "trtype": "tcp", 00:18:20.841 "traddr": "10.0.0.2", 00:18:20.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.841 "adrfam": "ipv4", 00:18:20.841 "trsvcid": "4420", 00:18:20.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.841 "method": "bdev_nvme_attach_controller", 00:18:20.841 "req_id": 1 00:18:20.841 } 00:18:20.841 Got JSON-RPC error response 00:18:20.841 response: 00:18:20.841 { 00:18:20.841 "code": -32602, 00:18:20.841 "message": "Invalid parameters" 00:18:20.841 } 00:18:20.841 17:43:42 -- target/tls.sh@36 -- # killprocess 631456 00:18:20.841 17:43:42 -- common/autotest_common.sh@926 -- # '[' -z 631456 ']' 00:18:20.841 17:43:42 -- common/autotest_common.sh@930 -- # kill -0 631456 00:18:20.841 17:43:42 -- common/autotest_common.sh@931 -- # uname 00:18:20.841 17:43:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:20.841 17:43:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 631456 00:18:20.841 17:43:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:20.841 17:43:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:20.841 17:43:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 631456' 00:18:20.841 killing process with pid 631456 00:18:20.841 17:43:42 -- common/autotest_common.sh@945 -- # kill 631456 00:18:20.841 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.841 00:18:20.841 Latency(us) 00:18:20.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.841 =================================================================================================================== 00:18:20.841 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.841 17:43:42 -- common/autotest_common.sh@950 -- # wait 631456 00:18:21.099 17:43:42 -- target/tls.sh@37 -- # return 1 00:18:21.099 17:43:42 -- common/autotest_common.sh@643 -- # es=1 00:18:21.099 17:43:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:21.099 17:43:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:21.099 17:43:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:21.099 17:43:42 -- target/tls.sh@167 -- # killprocess 626446 00:18:21.099 17:43:42 -- common/autotest_common.sh@926 -- # '[' -z 626446 ']' 00:18:21.099 17:43:42 -- common/autotest_common.sh@930 -- # kill -0 626446 00:18:21.099 17:43:42 -- common/autotest_common.sh@931 -- # uname 00:18:21.099 17:43:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:21.099 17:43:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 626446 00:18:21.099 17:43:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:21.099 17:43:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:21.099 17:43:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 626446' 00:18:21.099 killing process with pid 626446 00:18:21.099 17:43:42 -- common/autotest_common.sh@945 -- # kill 626446 00:18:21.099 17:43:42 -- common/autotest_common.sh@950 -- # wait 626446 00:18:21.357 17:43:42 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:18:21.357 17:43:42 -- target/tls.sh@49 -- # local key hash crc 00:18:21.357 17:43:42 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:21.357 17:43:42 -- target/tls.sh@51 -- # hash=02 00:18:21.357 17:43:42 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:18:21.357 17:43:42 -- target/tls.sh@52 -- # gzip -1 -c 00:18:21.357 17:43:42 -- target/tls.sh@52 -- # tail -c8 00:18:21.357 17:43:42 -- target/tls.sh@52 -- # head -c 4 00:18:21.357 17:43:42 -- target/tls.sh@52 -- # crc='�e�'\''' 00:18:21.357 17:43:42 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:21.357 17:43:42 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:18:21.357 17:43:42 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:21.357 17:43:42 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:21.357 17:43:42 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:21.357 17:43:42 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:21.357 17:43:42 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:21.357 17:43:42 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:18:21.357 17:43:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:21.357 17:43:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:21.357 17:43:42 -- common/autotest_common.sh@10 -- # set +x 00:18:21.357 17:43:42 -- nvmf/common.sh@469 -- # nvmfpid=631784 00:18:21.357 17:43:42 -- nvmf/common.sh@470 -- # waitforlisten 631784 00:18:21.357 17:43:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.357 17:43:42 -- common/autotest_common.sh@819 -- # '[' -z 631784 ']' 00:18:21.357 17:43:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.357 17:43:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:21.357 17:43:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.357 17:43:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:21.357 17:43:42 -- common/autotest_common.sh@10 -- # set +x 00:18:21.357 [2024-07-24 17:43:42.824063] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:21.357 [2024-07-24 17:43:42.824125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.357 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.357 [2024-07-24 17:43:42.880067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.616 [2024-07-24 17:43:42.958314] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:21.616 [2024-07-24 17:43:42.958418] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.616 [2024-07-24 17:43:42.958425] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.616 [2024-07-24 17:43:42.958431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
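The format_interchange_psk step above converts the 48-character configured key into the NVMeTLSkey-1:02:...: interchange form (02 being the hash argument passed to the helper): a CRC-32 of the key text is appended and the result is base64-encoded. The gzip -1 -c | tail -c8 | head -c 4 pipeline is simply a way to obtain that CRC-32, since the last eight bytes of a gzip stream are the CRC-32 of the input followed by its length. A sketch re-deriving the same value outside the test script:

    key=00112233445566778899aabbccddeeff0011223344556677
    # CRC-32 of the key text, taken from the gzip trailer; for this key the
    # bytes are \xc1\x65\xcd\x27, so holding them in a shell variable is safe.
    crc="$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)"
    echo "NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):"
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: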
00:18:21.616 [2024-07-24 17:43:42.958448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.182 17:43:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:22.182 17:43:43 -- common/autotest_common.sh@852 -- # return 0 00:18:22.182 17:43:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:22.182 17:43:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:22.182 17:43:43 -- common/autotest_common.sh@10 -- # set +x 00:18:22.182 17:43:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.182 17:43:43 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:22.182 17:43:43 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:22.182 17:43:43 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.441 [2024-07-24 17:43:43.798197] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.441 17:43:43 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.441 17:43:43 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:22.700 [2024-07-24 17:43:44.139083] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.700 [2024-07-24 17:43:44.139263] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.700 17:43:44 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:22.959 malloc0 00:18:22.959 17:43:44 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:22.959 17:43:44 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:23.218 17:43:44 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:23.218 17:43:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.218 17:43:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.218 17:43:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.218 17:43:44 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:18:23.218 17:43:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.218 17:43:44 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.218 17:43:44 -- target/tls.sh@28 -- # bdevperf_pid=632130 00:18:23.218 17:43:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.218 17:43:44 -- target/tls.sh@31 -- # waitforlisten 632130 /var/tmp/bdevperf.sock 00:18:23.218 17:43:44 -- common/autotest_common.sh@819 -- # '[' -z 632130 ']' 
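Condensed from the setup_nvmf_tgt trace above, the target-side sequence that the later attach relies on is: create the TCP transport, create the subsystem, add a TLS-enabled listener (-k), back the subsystem with a malloc bdev, and register the host together with its PSK file. A compressed sketch, with rpc.py standing in for the full scripts/rpc.py path used in the trace and $key_long_path for the key_long.txt file generated above:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk "$key_long_path"   # the key file must remain chmod 0600 (tested further below)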
00:18:23.218 17:43:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.218 17:43:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:23.218 17:43:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.218 17:43:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:23.218 17:43:44 -- common/autotest_common.sh@10 -- # set +x 00:18:23.218 [2024-07-24 17:43:44.695312] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:23.218 [2024-07-24 17:43:44.695365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632130 ] 00:18:23.218 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.218 [2024-07-24 17:43:44.745439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.477 [2024-07-24 17:43:44.817279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.044 17:43:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:24.044 17:43:45 -- common/autotest_common.sh@852 -- # return 0 00:18:24.044 17:43:45 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:24.302 [2024-07-24 17:43:45.644187] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.302 TLSTESTn1 00:18:24.302 17:43:45 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:24.302 Running I/O for 10 seconds... 
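The I/O run kicked off above is driven the same way in every iteration of this test: bdevperf is started idle (-z) on its own RPC socket, the TLSTEST controller is attached over that socket, and bdevperf.py perform_tests triggers the verify workload. A compressed sketch of that driver pattern, with paths shortened relative to the spdk tree used in the trace:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$key_long_path"
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests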
00:18:36.503 00:18:36.503 Latency(us) 00:18:36.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.503 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:36.503 Verification LBA range: start 0x0 length 0x2000 00:18:36.503 TLSTESTn1 : 10.04 1424.54 5.56 0.00 0.00 89710.18 6012.22 111696.14 00:18:36.503 =================================================================================================================== 00:18:36.503 Total : 1424.54 5.56 0.00 0.00 89710.18 6012.22 111696.14 00:18:36.503 0 00:18:36.503 17:43:55 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:36.503 17:43:55 -- target/tls.sh@45 -- # killprocess 632130 00:18:36.503 17:43:55 -- common/autotest_common.sh@926 -- # '[' -z 632130 ']' 00:18:36.503 17:43:55 -- common/autotest_common.sh@930 -- # kill -0 632130 00:18:36.503 17:43:55 -- common/autotest_common.sh@931 -- # uname 00:18:36.503 17:43:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:36.503 17:43:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 632130 00:18:36.503 17:43:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:36.503 17:43:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:36.503 17:43:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 632130' 00:18:36.503 killing process with pid 632130 00:18:36.503 17:43:55 -- common/autotest_common.sh@945 -- # kill 632130 00:18:36.503 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.503 00:18:36.503 Latency(us) 00:18:36.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.504 =================================================================================================================== 00:18:36.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.504 17:43:55 -- common/autotest_common.sh@950 -- # wait 632130 00:18:36.504 17:43:56 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:36.504 17:43:56 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:36.504 17:43:56 -- common/autotest_common.sh@640 -- # local es=0 00:18:36.504 17:43:56 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:36.504 17:43:56 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:18:36.504 17:43:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:36.504 17:43:56 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:18:36.504 17:43:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:36.504 17:43:56 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:36.504 17:43:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.504 17:43:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:36.504 17:43:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:36.504 17:43:56 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:18:36.504 17:43:56 -- target/tls.sh@25 -- 
# bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.504 17:43:56 -- target/tls.sh@28 -- # bdevperf_pid=633997 00:18:36.504 17:43:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.504 17:43:56 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.504 17:43:56 -- target/tls.sh@31 -- # waitforlisten 633997 /var/tmp/bdevperf.sock 00:18:36.504 17:43:56 -- common/autotest_common.sh@819 -- # '[' -z 633997 ']' 00:18:36.504 17:43:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.504 17:43:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:36.504 17:43:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.504 17:43:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:36.504 17:43:56 -- common/autotest_common.sh@10 -- # set +x 00:18:36.504 [2024-07-24 17:43:56.211079] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:36.504 [2024-07-24 17:43:56.211128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633997 ] 00:18:36.504 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.504 [2024-07-24 17:43:56.260361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.504 [2024-07-24 17:43:56.325874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.504 17:43:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:36.504 17:43:57 -- common/autotest_common.sh@852 -- # return 0 00:18:36.504 17:43:57 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:36.504 [2024-07-24 17:43:57.160313] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.504 [2024-07-24 17:43:57.160351] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:36.504 request: 00:18:36.504 { 00:18:36.504 "name": "TLSTEST", 00:18:36.504 "trtype": "tcp", 00:18:36.504 "traddr": "10.0.0.2", 00:18:36.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.504 "adrfam": "ipv4", 00:18:36.504 "trsvcid": "4420", 00:18:36.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.504 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:18:36.504 "method": "bdev_nvme_attach_controller", 00:18:36.504 "req_id": 1 00:18:36.504 } 00:18:36.504 Got JSON-RPC error response 00:18:36.504 response: 00:18:36.504 { 00:18:36.504 "code": -22, 00:18:36.504 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:18:36.504 } 00:18:36.504 17:43:57 -- target/tls.sh@36 -- # killprocess 633997 00:18:36.504 17:43:57 -- common/autotest_common.sh@926 -- # '[' -z 633997 ']' 00:18:36.504 17:43:57 -- common/autotest_common.sh@930 -- # kill -0 633997 
00:18:36.504 17:43:57 -- common/autotest_common.sh@931 -- # uname 00:18:36.504 17:43:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:36.504 17:43:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 633997 00:18:36.504 17:43:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:36.504 17:43:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:36.504 17:43:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 633997' 00:18:36.504 killing process with pid 633997 00:18:36.504 17:43:57 -- common/autotest_common.sh@945 -- # kill 633997 00:18:36.504 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.504 00:18:36.504 Latency(us) 00:18:36.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.504 =================================================================================================================== 00:18:36.504 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.504 17:43:57 -- common/autotest_common.sh@950 -- # wait 633997 00:18:36.504 17:43:57 -- target/tls.sh@37 -- # return 1 00:18:36.504 17:43:57 -- common/autotest_common.sh@643 -- # es=1 00:18:36.504 17:43:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:36.504 17:43:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:36.504 17:43:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:36.504 17:43:57 -- target/tls.sh@183 -- # killprocess 631784 00:18:36.504 17:43:57 -- common/autotest_common.sh@926 -- # '[' -z 631784 ']' 00:18:36.504 17:43:57 -- common/autotest_common.sh@930 -- # kill -0 631784 00:18:36.504 17:43:57 -- common/autotest_common.sh@931 -- # uname 00:18:36.504 17:43:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:36.504 17:43:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 631784 00:18:36.504 17:43:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:36.504 17:43:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:36.504 17:43:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 631784' 00:18:36.504 killing process with pid 631784 00:18:36.504 17:43:57 -- common/autotest_common.sh@945 -- # kill 631784 00:18:36.504 17:43:57 -- common/autotest_common.sh@950 -- # wait 631784 00:18:36.504 17:43:57 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:36.504 17:43:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:36.504 17:43:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:36.504 17:43:57 -- common/autotest_common.sh@10 -- # set +x 00:18:36.504 17:43:57 -- nvmf/common.sh@469 -- # nvmfpid=634253 00:18:36.504 17:43:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:36.504 17:43:57 -- nvmf/common.sh@470 -- # waitforlisten 634253 00:18:36.504 17:43:57 -- common/autotest_common.sh@819 -- # '[' -z 634253 ']' 00:18:36.504 17:43:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.504 17:43:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:36.504 17:43:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
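The chmod 0666 step above is a deliberate negative test: once the key file is group/world-readable, bdev_nvme_attach_controller refuses to load it ("Incorrect permissions for PSK file") and the RPC returns -22 "Could not retrieve PSK from file". The requirement, reduced to its essence ($key_long_path is the key_long.txt file from the trace):

    chmod 0666 "$key_long_path"   # too permissive: attach fails with -22
    chmod 0600 "$key_long_path"   # owner-only: attach succeeds, as in the earlier run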
00:18:36.504 17:43:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:36.504 17:43:57 -- common/autotest_common.sh@10 -- # set +x 00:18:36.504 [2024-07-24 17:43:57.718163] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:36.504 [2024-07-24 17:43:57.718212] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.504 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.504 [2024-07-24 17:43:57.774989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.504 [2024-07-24 17:43:57.840533] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:36.504 [2024-07-24 17:43:57.840657] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.504 [2024-07-24 17:43:57.840665] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.504 [2024-07-24 17:43:57.840672] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.504 [2024-07-24 17:43:57.840688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.112 17:43:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:37.112 17:43:58 -- common/autotest_common.sh@852 -- # return 0 00:18:37.112 17:43:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:37.112 17:43:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:37.112 17:43:58 -- common/autotest_common.sh@10 -- # set +x 00:18:37.112 17:43:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.112 17:43:58 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:37.112 17:43:58 -- common/autotest_common.sh@640 -- # local es=0 00:18:37.112 17:43:58 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:37.112 17:43:58 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:18:37.112 17:43:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:37.112 17:43:58 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:18:37.112 17:43:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:37.112 17:43:58 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:37.112 17:43:58 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:37.112 17:43:58 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:37.112 [2024-07-24 17:43:58.700026] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.371 17:43:58 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:37.371 17:43:58 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:37.630 [2024-07-24 17:43:59.020867] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.630 [2024-07-24 17:43:59.021051] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.630 17:43:59 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:37.630 malloc0 00:18:37.630 17:43:59 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:37.888 17:43:59 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:38.147 [2024-07-24 17:43:59.514186] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:38.147 [2024-07-24 17:43:59.514216] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:38.147 [2024-07-24 17:43:59.514232] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:38.147 request: 00:18:38.147 { 00:18:38.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.147 "host": "nqn.2016-06.io.spdk:host1", 00:18:38.147 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:18:38.147 "method": "nvmf_subsystem_add_host", 00:18:38.147 "req_id": 1 00:18:38.147 } 00:18:38.147 Got JSON-RPC error response 00:18:38.147 response: 00:18:38.147 { 00:18:38.147 "code": -32603, 00:18:38.147 "message": "Internal error" 00:18:38.147 } 00:18:38.147 17:43:59 -- common/autotest_common.sh@643 -- # es=1 00:18:38.147 17:43:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:38.147 17:43:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:38.147 17:43:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:38.147 17:43:59 -- target/tls.sh@189 -- # killprocess 634253 00:18:38.147 17:43:59 -- common/autotest_common.sh@926 -- # '[' -z 634253 ']' 00:18:38.147 17:43:59 -- common/autotest_common.sh@930 -- # kill -0 634253 00:18:38.147 17:43:59 -- common/autotest_common.sh@931 -- # uname 00:18:38.147 17:43:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:38.147 17:43:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 634253 00:18:38.147 17:43:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:38.147 17:43:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:38.147 17:43:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 634253' 00:18:38.147 killing process with pid 634253 00:18:38.147 17:43:59 -- common/autotest_common.sh@945 -- # kill 634253 00:18:38.147 17:43:59 -- common/autotest_common.sh@950 -- # wait 634253 00:18:38.406 17:43:59 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:38.406 17:43:59 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:18:38.406 17:43:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:38.406 17:43:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:38.406 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:18:38.406 17:43:59 -- nvmf/common.sh@469 -- # nvmfpid=634711 00:18:38.406 17:43:59 -- nvmf/common.sh@470 -- # waitforlisten 634711 00:18:38.406 17:43:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:38.406 17:43:59 -- common/autotest_common.sh@819 -- # '[' -z 634711 ']' 00:18:38.406 17:43:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.406 17:43:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:38.406 17:43:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.406 17:43:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:38.406 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:18:38.406 [2024-07-24 17:43:59.844286] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:38.406 [2024-07-24 17:43:59.844330] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.406 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.406 [2024-07-24 17:43:59.900991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.406 [2024-07-24 17:43:59.977821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:38.406 [2024-07-24 17:43:59.977940] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.406 [2024-07-24 17:43:59.977947] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.406 [2024-07-24 17:43:59.977954] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
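The same permission rule is enforced on the target side: with the key file still at 0666, the nvmf_subsystem_add_host call above fails in tcp_load_psk ("Incorrect permissions for PSK file") and the RPC returns -32603 Internal error; the trace then restores chmod 0600 and restarts the target, after which the setup that follows succeeds. A hypothetical pre-check, not part of tls.sh, that would catch this before issuing the RPC:

    # Hypothetical guard: require owner-only permissions before add_host.
    mode=$(stat -c %a "$key_long_path")
    [ "$mode" = "600" ] || { echo "PSK file mode is $mode, expected 600" >&2; exit 1; }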
00:18:38.406 [2024-07-24 17:43:59.977968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.343 17:44:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:39.343 17:44:00 -- common/autotest_common.sh@852 -- # return 0 00:18:39.343 17:44:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:39.343 17:44:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:39.343 17:44:00 -- common/autotest_common.sh@10 -- # set +x 00:18:39.343 17:44:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.343 17:44:00 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:39.343 17:44:00 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:39.343 17:44:00 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:39.343 [2024-07-24 17:44:00.823818] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.343 17:44:00 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:39.602 17:44:01 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:39.602 [2024-07-24 17:44:01.140646] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.602 [2024-07-24 17:44:01.140821] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.602 17:44:01 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:39.860 malloc0 00:18:39.860 17:44:01 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:40.120 17:44:01 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:40.120 17:44:01 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:40.120 17:44:01 -- target/tls.sh@197 -- # bdevperf_pid=635007 00:18:40.120 17:44:01 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.120 17:44:01 -- target/tls.sh@200 -- # waitforlisten 635007 /var/tmp/bdevperf.sock 00:18:40.120 17:44:01 -- common/autotest_common.sh@819 -- # '[' -z 635007 ']' 00:18:40.120 17:44:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.120 17:44:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:40.120 17:44:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
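After the successful TLS attach that follows, the trace snapshots both sides' configuration with save_config: once against the default target socket and once against the bdevperf socket, which is where the long JSON dumps below come from. A sketch of capturing those dumps to files (the output filenames are hypothetical):

    scripts/rpc.py save_config > tgtconf.json                                  # target configuration
    scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperfconf.json   # bdevperf configuration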
00:18:40.120 17:44:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:40.120 17:44:01 -- common/autotest_common.sh@10 -- # set +x 00:18:40.120 [2024-07-24 17:44:01.678900] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:40.120 [2024-07-24 17:44:01.678945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635007 ] 00:18:40.120 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.380 [2024-07-24 17:44:01.727385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.380 [2024-07-24 17:44:01.798205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.948 17:44:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:40.948 17:44:02 -- common/autotest_common.sh@852 -- # return 0 00:18:40.948 17:44:02 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:41.208 [2024-07-24 17:44:02.608270] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.208 TLSTESTn1 00:18:41.208 17:44:02 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:41.467 17:44:02 -- target/tls.sh@205 -- # tgtconf='{ 00:18:41.467 "subsystems": [ 00:18:41.467 { 00:18:41.467 "subsystem": "iobuf", 00:18:41.467 "config": [ 00:18:41.467 { 00:18:41.467 "method": "iobuf_set_options", 00:18:41.467 "params": { 00:18:41.467 "small_pool_count": 8192, 00:18:41.467 "large_pool_count": 1024, 00:18:41.467 "small_bufsize": 8192, 00:18:41.467 "large_bufsize": 135168 00:18:41.467 } 00:18:41.467 } 00:18:41.467 ] 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "subsystem": "sock", 00:18:41.467 "config": [ 00:18:41.467 { 00:18:41.467 "method": "sock_impl_set_options", 00:18:41.467 "params": { 00:18:41.467 "impl_name": "posix", 00:18:41.467 "recv_buf_size": 2097152, 00:18:41.467 "send_buf_size": 2097152, 00:18:41.467 "enable_recv_pipe": true, 00:18:41.467 "enable_quickack": false, 00:18:41.467 "enable_placement_id": 0, 00:18:41.467 "enable_zerocopy_send_server": true, 00:18:41.467 "enable_zerocopy_send_client": false, 00:18:41.467 "zerocopy_threshold": 0, 00:18:41.467 "tls_version": 0, 00:18:41.467 "enable_ktls": false 00:18:41.467 } 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "method": "sock_impl_set_options", 00:18:41.467 "params": { 00:18:41.467 "impl_name": "ssl", 00:18:41.467 "recv_buf_size": 4096, 00:18:41.467 "send_buf_size": 4096, 00:18:41.467 "enable_recv_pipe": true, 00:18:41.467 "enable_quickack": false, 00:18:41.467 "enable_placement_id": 0, 00:18:41.467 "enable_zerocopy_send_server": true, 00:18:41.467 "enable_zerocopy_send_client": false, 00:18:41.467 "zerocopy_threshold": 0, 00:18:41.467 "tls_version": 0, 00:18:41.467 "enable_ktls": false 00:18:41.467 } 00:18:41.467 } 00:18:41.467 ] 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "subsystem": "vmd", 00:18:41.467 "config": [] 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "subsystem": "accel", 00:18:41.467 "config": [ 00:18:41.467 { 00:18:41.467 "method": "accel_set_options", 00:18:41.467 "params": { 00:18:41.467 "small_cache_size": 128, 
00:18:41.467 "large_cache_size": 16, 00:18:41.467 "task_count": 2048, 00:18:41.467 "sequence_count": 2048, 00:18:41.467 "buf_count": 2048 00:18:41.467 } 00:18:41.467 } 00:18:41.467 ] 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "subsystem": "bdev", 00:18:41.467 "config": [ 00:18:41.467 { 00:18:41.467 "method": "bdev_set_options", 00:18:41.467 "params": { 00:18:41.467 "bdev_io_pool_size": 65535, 00:18:41.467 "bdev_io_cache_size": 256, 00:18:41.467 "bdev_auto_examine": true, 00:18:41.467 "iobuf_small_cache_size": 128, 00:18:41.467 "iobuf_large_cache_size": 16 00:18:41.467 } 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "method": "bdev_raid_set_options", 00:18:41.467 "params": { 00:18:41.467 "process_window_size_kb": 1024 00:18:41.467 } 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "method": "bdev_iscsi_set_options", 00:18:41.467 "params": { 00:18:41.467 "timeout_sec": 30 00:18:41.467 } 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "method": "bdev_nvme_set_options", 00:18:41.467 "params": { 00:18:41.467 "action_on_timeout": "none", 00:18:41.467 "timeout_us": 0, 00:18:41.467 "timeout_admin_us": 0, 00:18:41.467 "keep_alive_timeout_ms": 10000, 00:18:41.467 "transport_retry_count": 4, 00:18:41.467 "arbitration_burst": 0, 00:18:41.467 "low_priority_weight": 0, 00:18:41.467 "medium_priority_weight": 0, 00:18:41.467 "high_priority_weight": 0, 00:18:41.467 "nvme_adminq_poll_period_us": 10000, 00:18:41.467 "nvme_ioq_poll_period_us": 0, 00:18:41.467 "io_queue_requests": 0, 00:18:41.467 "delay_cmd_submit": true, 00:18:41.467 "bdev_retry_count": 3, 00:18:41.467 "transport_ack_timeout": 0, 00:18:41.467 "ctrlr_loss_timeout_sec": 0, 00:18:41.467 "reconnect_delay_sec": 0, 00:18:41.467 "fast_io_fail_timeout_sec": 0, 00:18:41.467 "generate_uuids": false, 00:18:41.467 "transport_tos": 0, 00:18:41.467 "io_path_stat": false, 00:18:41.467 "allow_accel_sequence": false 00:18:41.467 } 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "method": "bdev_nvme_set_hotplug", 00:18:41.467 "params": { 00:18:41.467 "period_us": 100000, 00:18:41.467 "enable": false 00:18:41.467 } 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "method": "bdev_malloc_create", 00:18:41.467 "params": { 00:18:41.467 "name": "malloc0", 00:18:41.467 "num_blocks": 8192, 00:18:41.467 "block_size": 4096, 00:18:41.467 "physical_block_size": 4096, 00:18:41.467 "uuid": "9139bd97-cbdd-45e7-aa60-ac43b217696c", 00:18:41.467 "optimal_io_boundary": 0 00:18:41.467 } 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "method": "bdev_wait_for_examine" 00:18:41.467 } 00:18:41.467 ] 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "subsystem": "nbd", 00:18:41.467 "config": [] 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "subsystem": "scheduler", 00:18:41.467 "config": [ 00:18:41.467 { 00:18:41.467 "method": "framework_set_scheduler", 00:18:41.467 "params": { 00:18:41.467 "name": "static" 00:18:41.467 } 00:18:41.467 } 00:18:41.467 ] 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "subsystem": "nvmf", 00:18:41.467 "config": [ 00:18:41.467 { 00:18:41.467 "method": "nvmf_set_config", 00:18:41.467 "params": { 00:18:41.467 "discovery_filter": "match_any", 00:18:41.467 "admin_cmd_passthru": { 00:18:41.467 "identify_ctrlr": false 00:18:41.467 } 00:18:41.467 } 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "method": "nvmf_set_max_subsystems", 00:18:41.467 "params": { 00:18:41.467 "max_subsystems": 1024 00:18:41.467 } 00:18:41.467 }, 00:18:41.467 { 00:18:41.467 "method": "nvmf_set_crdt", 00:18:41.467 "params": { 00:18:41.467 "crdt1": 0, 00:18:41.467 "crdt2": 0, 00:18:41.468 "crdt3": 0 00:18:41.468 } 
00:18:41.468 }, 00:18:41.468 { 00:18:41.468 "method": "nvmf_create_transport", 00:18:41.468 "params": { 00:18:41.468 "trtype": "TCP", 00:18:41.468 "max_queue_depth": 128, 00:18:41.468 "max_io_qpairs_per_ctrlr": 127, 00:18:41.468 "in_capsule_data_size": 4096, 00:18:41.468 "max_io_size": 131072, 00:18:41.468 "io_unit_size": 131072, 00:18:41.468 "max_aq_depth": 128, 00:18:41.468 "num_shared_buffers": 511, 00:18:41.468 "buf_cache_size": 4294967295, 00:18:41.468 "dif_insert_or_strip": false, 00:18:41.468 "zcopy": false, 00:18:41.468 "c2h_success": false, 00:18:41.468 "sock_priority": 0, 00:18:41.468 "abort_timeout_sec": 1 00:18:41.468 } 00:18:41.468 }, 00:18:41.468 { 00:18:41.468 "method": "nvmf_create_subsystem", 00:18:41.468 "params": { 00:18:41.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.468 "allow_any_host": false, 00:18:41.468 "serial_number": "SPDK00000000000001", 00:18:41.468 "model_number": "SPDK bdev Controller", 00:18:41.468 "max_namespaces": 10, 00:18:41.468 "min_cntlid": 1, 00:18:41.468 "max_cntlid": 65519, 00:18:41.468 "ana_reporting": false 00:18:41.468 } 00:18:41.468 }, 00:18:41.468 { 00:18:41.468 "method": "nvmf_subsystem_add_host", 00:18:41.468 "params": { 00:18:41.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.468 "host": "nqn.2016-06.io.spdk:host1", 00:18:41.468 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:18:41.468 } 00:18:41.468 }, 00:18:41.468 { 00:18:41.468 "method": "nvmf_subsystem_add_ns", 00:18:41.468 "params": { 00:18:41.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.468 "namespace": { 00:18:41.468 "nsid": 1, 00:18:41.468 "bdev_name": "malloc0", 00:18:41.468 "nguid": "9139BD97CBDD45E7AA60AC43B217696C", 00:18:41.468 "uuid": "9139bd97-cbdd-45e7-aa60-ac43b217696c" 00:18:41.468 } 00:18:41.468 } 00:18:41.468 }, 00:18:41.468 { 00:18:41.468 "method": "nvmf_subsystem_add_listener", 00:18:41.468 "params": { 00:18:41.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.468 "listen_address": { 00:18:41.468 "trtype": "TCP", 00:18:41.468 "adrfam": "IPv4", 00:18:41.468 "traddr": "10.0.0.2", 00:18:41.468 "trsvcid": "4420" 00:18:41.468 }, 00:18:41.468 "secure_channel": true 00:18:41.468 } 00:18:41.468 } 00:18:41.468 ] 00:18:41.468 } 00:18:41.468 ] 00:18:41.468 }' 00:18:41.468 17:44:02 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:41.727 17:44:03 -- target/tls.sh@206 -- # bdevperfconf='{ 00:18:41.727 "subsystems": [ 00:18:41.727 { 00:18:41.727 "subsystem": "iobuf", 00:18:41.727 "config": [ 00:18:41.727 { 00:18:41.727 "method": "iobuf_set_options", 00:18:41.727 "params": { 00:18:41.727 "small_pool_count": 8192, 00:18:41.727 "large_pool_count": 1024, 00:18:41.727 "small_bufsize": 8192, 00:18:41.727 "large_bufsize": 135168 00:18:41.727 } 00:18:41.727 } 00:18:41.727 ] 00:18:41.727 }, 00:18:41.727 { 00:18:41.727 "subsystem": "sock", 00:18:41.727 "config": [ 00:18:41.727 { 00:18:41.727 "method": "sock_impl_set_options", 00:18:41.727 "params": { 00:18:41.727 "impl_name": "posix", 00:18:41.727 "recv_buf_size": 2097152, 00:18:41.727 "send_buf_size": 2097152, 00:18:41.727 "enable_recv_pipe": true, 00:18:41.727 "enable_quickack": false, 00:18:41.727 "enable_placement_id": 0, 00:18:41.727 "enable_zerocopy_send_server": true, 00:18:41.727 "enable_zerocopy_send_client": false, 00:18:41.727 "zerocopy_threshold": 0, 00:18:41.727 "tls_version": 0, 00:18:41.727 "enable_ktls": false 00:18:41.727 } 00:18:41.727 }, 00:18:41.727 { 00:18:41.727 "method": 
"sock_impl_set_options", 00:18:41.727 "params": { 00:18:41.727 "impl_name": "ssl", 00:18:41.727 "recv_buf_size": 4096, 00:18:41.727 "send_buf_size": 4096, 00:18:41.727 "enable_recv_pipe": true, 00:18:41.727 "enable_quickack": false, 00:18:41.727 "enable_placement_id": 0, 00:18:41.727 "enable_zerocopy_send_server": true, 00:18:41.727 "enable_zerocopy_send_client": false, 00:18:41.727 "zerocopy_threshold": 0, 00:18:41.727 "tls_version": 0, 00:18:41.727 "enable_ktls": false 00:18:41.727 } 00:18:41.727 } 00:18:41.727 ] 00:18:41.727 }, 00:18:41.727 { 00:18:41.727 "subsystem": "vmd", 00:18:41.727 "config": [] 00:18:41.727 }, 00:18:41.727 { 00:18:41.727 "subsystem": "accel", 00:18:41.727 "config": [ 00:18:41.727 { 00:18:41.727 "method": "accel_set_options", 00:18:41.727 "params": { 00:18:41.727 "small_cache_size": 128, 00:18:41.727 "large_cache_size": 16, 00:18:41.727 "task_count": 2048, 00:18:41.727 "sequence_count": 2048, 00:18:41.727 "buf_count": 2048 00:18:41.727 } 00:18:41.727 } 00:18:41.727 ] 00:18:41.727 }, 00:18:41.727 { 00:18:41.727 "subsystem": "bdev", 00:18:41.727 "config": [ 00:18:41.727 { 00:18:41.727 "method": "bdev_set_options", 00:18:41.727 "params": { 00:18:41.727 "bdev_io_pool_size": 65535, 00:18:41.727 "bdev_io_cache_size": 256, 00:18:41.727 "bdev_auto_examine": true, 00:18:41.727 "iobuf_small_cache_size": 128, 00:18:41.727 "iobuf_large_cache_size": 16 00:18:41.727 } 00:18:41.727 }, 00:18:41.727 { 00:18:41.727 "method": "bdev_raid_set_options", 00:18:41.727 "params": { 00:18:41.727 "process_window_size_kb": 1024 00:18:41.727 } 00:18:41.727 }, 00:18:41.727 { 00:18:41.727 "method": "bdev_iscsi_set_options", 00:18:41.727 "params": { 00:18:41.727 "timeout_sec": 30 00:18:41.727 } 00:18:41.727 }, 00:18:41.727 { 00:18:41.727 "method": "bdev_nvme_set_options", 00:18:41.727 "params": { 00:18:41.727 "action_on_timeout": "none", 00:18:41.727 "timeout_us": 0, 00:18:41.728 "timeout_admin_us": 0, 00:18:41.728 "keep_alive_timeout_ms": 10000, 00:18:41.728 "transport_retry_count": 4, 00:18:41.728 "arbitration_burst": 0, 00:18:41.728 "low_priority_weight": 0, 00:18:41.728 "medium_priority_weight": 0, 00:18:41.728 "high_priority_weight": 0, 00:18:41.728 "nvme_adminq_poll_period_us": 10000, 00:18:41.728 "nvme_ioq_poll_period_us": 0, 00:18:41.728 "io_queue_requests": 512, 00:18:41.728 "delay_cmd_submit": true, 00:18:41.728 "bdev_retry_count": 3, 00:18:41.728 "transport_ack_timeout": 0, 00:18:41.728 "ctrlr_loss_timeout_sec": 0, 00:18:41.728 "reconnect_delay_sec": 0, 00:18:41.728 "fast_io_fail_timeout_sec": 0, 00:18:41.728 "generate_uuids": false, 00:18:41.728 "transport_tos": 0, 00:18:41.728 "io_path_stat": false, 00:18:41.728 "allow_accel_sequence": false 00:18:41.728 } 00:18:41.728 }, 00:18:41.728 { 00:18:41.728 "method": "bdev_nvme_attach_controller", 00:18:41.728 "params": { 00:18:41.728 "name": "TLSTEST", 00:18:41.728 "trtype": "TCP", 00:18:41.728 "adrfam": "IPv4", 00:18:41.728 "traddr": "10.0.0.2", 00:18:41.728 "trsvcid": "4420", 00:18:41.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.728 "prchk_reftag": false, 00:18:41.728 "prchk_guard": false, 00:18:41.728 "ctrlr_loss_timeout_sec": 0, 00:18:41.728 "reconnect_delay_sec": 0, 00:18:41.728 "fast_io_fail_timeout_sec": 0, 00:18:41.728 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:18:41.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.728 "hdgst": false, 00:18:41.728 "ddgst": false 00:18:41.728 } 00:18:41.728 }, 00:18:41.728 { 00:18:41.728 "method": "bdev_nvme_set_hotplug", 00:18:41.728 
"params": { 00:18:41.728 "period_us": 100000, 00:18:41.728 "enable": false 00:18:41.728 } 00:18:41.728 }, 00:18:41.728 { 00:18:41.728 "method": "bdev_wait_for_examine" 00:18:41.728 } 00:18:41.728 ] 00:18:41.728 }, 00:18:41.728 { 00:18:41.728 "subsystem": "nbd", 00:18:41.728 "config": [] 00:18:41.728 } 00:18:41.728 ] 00:18:41.728 }' 00:18:41.728 17:44:03 -- target/tls.sh@208 -- # killprocess 635007 00:18:41.728 17:44:03 -- common/autotest_common.sh@926 -- # '[' -z 635007 ']' 00:18:41.728 17:44:03 -- common/autotest_common.sh@930 -- # kill -0 635007 00:18:41.728 17:44:03 -- common/autotest_common.sh@931 -- # uname 00:18:41.728 17:44:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:41.728 17:44:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 635007 00:18:41.728 17:44:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:41.728 17:44:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:41.728 17:44:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 635007' 00:18:41.728 killing process with pid 635007 00:18:41.728 17:44:03 -- common/autotest_common.sh@945 -- # kill 635007 00:18:41.728 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.728 00:18:41.728 Latency(us) 00:18:41.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.728 =================================================================================================================== 00:18:41.728 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:41.728 17:44:03 -- common/autotest_common.sh@950 -- # wait 635007 00:18:41.987 17:44:03 -- target/tls.sh@209 -- # killprocess 634711 00:18:41.987 17:44:03 -- common/autotest_common.sh@926 -- # '[' -z 634711 ']' 00:18:41.987 17:44:03 -- common/autotest_common.sh@930 -- # kill -0 634711 00:18:41.987 17:44:03 -- common/autotest_common.sh@931 -- # uname 00:18:41.987 17:44:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:41.987 17:44:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 634711 00:18:41.987 17:44:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:41.987 17:44:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:41.987 17:44:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 634711' 00:18:41.987 killing process with pid 634711 00:18:41.987 17:44:03 -- common/autotest_common.sh@945 -- # kill 634711 00:18:41.987 17:44:03 -- common/autotest_common.sh@950 -- # wait 634711 00:18:42.246 17:44:03 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:42.246 17:44:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:42.246 17:44:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:42.246 17:44:03 -- target/tls.sh@212 -- # echo '{ 00:18:42.246 "subsystems": [ 00:18:42.246 { 00:18:42.246 "subsystem": "iobuf", 00:18:42.246 "config": [ 00:18:42.246 { 00:18:42.246 "method": "iobuf_set_options", 00:18:42.246 "params": { 00:18:42.246 "small_pool_count": 8192, 00:18:42.247 "large_pool_count": 1024, 00:18:42.247 "small_bufsize": 8192, 00:18:42.247 "large_bufsize": 135168 00:18:42.247 } 00:18:42.247 } 00:18:42.247 ] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "sock", 00:18:42.247 "config": [ 00:18:42.247 { 00:18:42.247 "method": "sock_impl_set_options", 00:18:42.247 "params": { 00:18:42.247 "impl_name": "posix", 00:18:42.247 "recv_buf_size": 2097152, 00:18:42.247 "send_buf_size": 2097152, 00:18:42.247 
"enable_recv_pipe": true, 00:18:42.247 "enable_quickack": false, 00:18:42.247 "enable_placement_id": 0, 00:18:42.247 "enable_zerocopy_send_server": true, 00:18:42.247 "enable_zerocopy_send_client": false, 00:18:42.247 "zerocopy_threshold": 0, 00:18:42.247 "tls_version": 0, 00:18:42.247 "enable_ktls": false 00:18:42.247 } 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "method": "sock_impl_set_options", 00:18:42.247 "params": { 00:18:42.247 "impl_name": "ssl", 00:18:42.247 "recv_buf_size": 4096, 00:18:42.247 "send_buf_size": 4096, 00:18:42.247 "enable_recv_pipe": true, 00:18:42.247 "enable_quickack": false, 00:18:42.247 "enable_placement_id": 0, 00:18:42.247 "enable_zerocopy_send_server": true, 00:18:42.247 "enable_zerocopy_send_client": false, 00:18:42.247 "zerocopy_threshold": 0, 00:18:42.247 "tls_version": 0, 00:18:42.247 "enable_ktls": false 00:18:42.247 } 00:18:42.247 } 00:18:42.247 ] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "vmd", 00:18:42.247 "config": [] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "accel", 00:18:42.247 "config": [ 00:18:42.247 { 00:18:42.247 "method": "accel_set_options", 00:18:42.247 "params": { 00:18:42.247 "small_cache_size": 128, 00:18:42.247 "large_cache_size": 16, 00:18:42.247 "task_count": 2048, 00:18:42.247 "sequence_count": 2048, 00:18:42.247 "buf_count": 2048 00:18:42.247 } 00:18:42.247 } 00:18:42.247 ] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "bdev", 00:18:42.247 "config": [ 00:18:42.247 { 00:18:42.247 "method": "bdev_set_options", 00:18:42.247 "params": { 00:18:42.247 "bdev_io_pool_size": 65535, 00:18:42.247 "bdev_io_cache_size": 256, 00:18:42.247 "bdev_auto_examine": true, 00:18:42.247 "iobuf_small_cache_size": 128, 00:18:42.247 "iobuf_large_cache_size": 16 00:18:42.247 } 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "method": "bdev_raid_set_options", 00:18:42.247 "params": { 00:18:42.247 "process_window_size_kb": 1024 00:18:42.247 } 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "method": "bdev_iscsi_set_options", 00:18:42.247 "params": { 00:18:42.247 "timeout_sec": 30 00:18:42.247 } 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "method": "bdev_nvme_set_options", 00:18:42.247 "params": { 00:18:42.247 "action_on_timeout": "none", 00:18:42.247 "timeout_us": 0, 00:18:42.247 "timeout_admin_us": 0, 00:18:42.247 "keep_alive_timeout_ms": 10000, 00:18:42.247 "transport_retry_count": 4, 00:18:42.247 "arbitration_burst": 0, 00:18:42.247 "low_priority_weight": 0, 00:18:42.247 "medium_priority_weight": 0, 00:18:42.247 "high_priority_weight": 0, 00:18:42.247 "nvme_adminq_poll_period_us": 10000, 00:18:42.247 "nvme_ioq_poll_period_us": 0, 00:18:42.247 "io_queue_requests": 0, 00:18:42.247 "delay_cmd_submit": true, 00:18:42.247 "bdev_retry_count": 3, 00:18:42.247 "transport_ack_timeout": 0, 00:18:42.247 "ctrlr_loss_timeout_sec": 0, 00:18:42.247 "reconnect_delay_sec": 0, 00:18:42.247 "fast_io_fail_timeout_sec": 0, 00:18:42.247 "generate_uuids": false, 00:18:42.247 "transport_tos": 0, 00:18:42.247 "io_path_stat": false, 00:18:42.247 "allow_accel_sequence": false 00:18:42.247 } 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "method": "bdev_nvme_set_hotplug", 00:18:42.247 "params": { 00:18:42.247 "period_us": 100000, 00:18:42.247 "enable": false 00:18:42.247 } 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "method": "bdev_malloc_create", 00:18:42.247 "params": { 00:18:42.247 "name": "malloc0", 00:18:42.247 "num_blocks": 8192, 00:18:42.247 "block_size": 4096, 00:18:42.247 "physical_block_size": 4096, 00:18:42.247 "uuid": 
"9139bd97-cbdd-45e7-aa60-ac43b217696c", 00:18:42.247 "optimal_io_boundary": 0 00:18:42.247 } 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "method": "bdev_wait_for_examine" 00:18:42.247 } 00:18:42.247 ] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "nbd", 00:18:42.247 "config": [] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "scheduler", 00:18:42.247 "config": [ 00:18:42.247 { 00:18:42.247 "method": "framework_set_scheduler", 00:18:42.247 "params": { 00:18:42.247 "name": "static" 00:18:42.247 } 00:18:42.247 } 00:18:42.247 ] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "nvmf", 00:18:42.247 "config": [ 00:18:42.247 { 00:18:42.247 "method": "nvmf_set_config", 00:18:42.247 "params": { 00:18:42.247 "discovery_filter": "match_any", 00:18:42.247 "admin_cmd_passthru": { 00:18:42.247 "identify_ctrlr": false 00:18:42.247 } 00:18:42.247 } 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "method": "nvmf_set_max_subsystems", 00:18:42.247 "params": { 00:18:42.247 "max_subsystems": 1024 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "nvmf_set_crdt", 00:18:42.248 "params": { 00:18:42.248 "crdt1": 0, 00:18:42.248 "crdt2": 0, 00:18:42.248 "crdt3": 0 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "nvmf_create_transport", 00:18:42.248 "params": { 00:18:42.248 "trtype": "TCP", 00:18:42.248 "max_queue_depth": 128, 00:18:42.248 "max_io_qpairs_per_ctrlr": 127, 00:18:42.248 "in_capsule_data_size": 4096, 00:18:42.248 "max_io_size": 131072, 00:18:42.248 "io_unit_size": 131072, 00:18:42.248 "max_aq_depth": 128, 00:18:42.248 "num_shared_buffers": 511, 00:18:42.248 "buf_cache_size": 4294967295, 00:18:42.248 "dif_insert_or_strip": false, 00:18:42.248 "zcopy": false, 00:18:42.248 "c2h_success": false, 00:18:42.248 "sock_priority": 0, 00:18:42.248 "abort_timeout_sec": 1 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "nvmf_create_subsystem", 00:18:42.248 "params": { 00:18:42.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.248 "allow_any_host": false, 00:18:42.248 "serial_number": "SPDK00000000000001", 00:18:42.248 "model_number": "SPDK bdev Controller", 00:18:42.248 "max_namespaces": 10, 00:18:42.248 "min_cntlid": 1, 00:18:42.248 "max_cntlid": 65519, 00:18:42.248 "ana_reporting": false 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "nvmf_subsystem_add_host", 00:18:42.248 "params": { 00:18:42.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.248 "host": "nqn.2016-06.io.spdk:host1", 00:18:42.248 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "nvmf_subsystem_add_ns", 00:18:42.248 "params": { 00:18:42.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.248 "namespace": { 00:18:42.248 "nsid": 1, 00:18:42.248 "bdev_name": "malloc0", 00:18:42.248 "nguid": "9139BD97CBDD45E7AA60AC43B217696C", 00:18:42.248 "uuid": "9139bd97-cbdd-45e7-aa60-ac43b217696c" 00:18:42.248 } 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "nvmf_subsystem_add_listener", 00:18:42.248 "params": { 00:18:42.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.248 "listen_address": { 00:18:42.248 "trtype": "TCP", 00:18:42.248 "adrfam": "IPv4", 00:18:42.248 "traddr": "10.0.0.2", 00:18:42.248 "trsvcid": "4420" 00:18:42.248 }, 00:18:42.248 "secure_channel": true 00:18:42.248 } 00:18:42.248 } 00:18:42.248 ] 00:18:42.248 } 00:18:42.248 ] 00:18:42.248 }' 00:18:42.248 17:44:03 -- common/autotest_common.sh@10 -- # set +x 
00:18:42.248 17:44:03 -- nvmf/common.sh@469 -- # nvmfpid=635268 00:18:42.248 17:44:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:42.248 17:44:03 -- nvmf/common.sh@470 -- # waitforlisten 635268 00:18:42.248 17:44:03 -- common/autotest_common.sh@819 -- # '[' -z 635268 ']' 00:18:42.248 17:44:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.248 17:44:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:42.248 17:44:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.248 17:44:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:42.248 17:44:03 -- common/autotest_common.sh@10 -- # set +x 00:18:42.248 [2024-07-24 17:44:03.726901] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:42.248 [2024-07-24 17:44:03.726951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.248 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.248 [2024-07-24 17:44:03.785700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.507 [2024-07-24 17:44:03.862113] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:42.507 [2024-07-24 17:44:03.862237] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.507 [2024-07-24 17:44:03.862245] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.507 [2024-07-24 17:44:03.862251] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.507 [2024-07-24 17:44:03.862269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.507 [2024-07-24 17:44:04.057198] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.507 [2024-07-24 17:44:04.089237] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:42.507 [2024-07-24 17:44:04.089406] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.076 17:44:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:43.076 17:44:04 -- common/autotest_common.sh@852 -- # return 0 00:18:43.076 17:44:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:43.076 17:44:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:43.076 17:44:04 -- common/autotest_common.sh@10 -- # set +x 00:18:43.076 17:44:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.076 17:44:04 -- target/tls.sh@216 -- # bdevperf_pid=635518 00:18:43.076 17:44:04 -- target/tls.sh@217 -- # waitforlisten 635518 /var/tmp/bdevperf.sock 00:18:43.076 17:44:04 -- common/autotest_common.sh@819 -- # '[' -z 635518 ']' 00:18:43.076 17:44:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.076 17:44:04 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:43.076 17:44:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:43.076 17:44:04 -- target/tls.sh@213 -- # echo '{ 00:18:43.076 "subsystems": [ 00:18:43.076 { 00:18:43.076 "subsystem": "iobuf", 00:18:43.076 "config": [ 00:18:43.076 { 00:18:43.076 "method": "iobuf_set_options", 00:18:43.076 "params": { 00:18:43.076 "small_pool_count": 8192, 00:18:43.076 "large_pool_count": 1024, 00:18:43.076 "small_bufsize": 8192, 00:18:43.076 "large_bufsize": 135168 00:18:43.076 } 00:18:43.076 } 00:18:43.076 ] 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "subsystem": "sock", 00:18:43.076 "config": [ 00:18:43.076 { 00:18:43.076 "method": "sock_impl_set_options", 00:18:43.076 "params": { 00:18:43.076 "impl_name": "posix", 00:18:43.076 "recv_buf_size": 2097152, 00:18:43.076 "send_buf_size": 2097152, 00:18:43.076 "enable_recv_pipe": true, 00:18:43.076 "enable_quickack": false, 00:18:43.076 "enable_placement_id": 0, 00:18:43.076 "enable_zerocopy_send_server": true, 00:18:43.076 "enable_zerocopy_send_client": false, 00:18:43.076 "zerocopy_threshold": 0, 00:18:43.076 "tls_version": 0, 00:18:43.076 "enable_ktls": false 00:18:43.076 } 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "method": "sock_impl_set_options", 00:18:43.076 "params": { 00:18:43.076 "impl_name": "ssl", 00:18:43.076 "recv_buf_size": 4096, 00:18:43.076 "send_buf_size": 4096, 00:18:43.076 "enable_recv_pipe": true, 00:18:43.076 "enable_quickack": false, 00:18:43.076 "enable_placement_id": 0, 00:18:43.076 "enable_zerocopy_send_server": true, 00:18:43.076 "enable_zerocopy_send_client": false, 00:18:43.076 "zerocopy_threshold": 0, 00:18:43.076 "tls_version": 0, 00:18:43.076 "enable_ktls": false 00:18:43.076 } 00:18:43.076 } 00:18:43.076 ] 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "subsystem": "vmd", 00:18:43.076 "config": [] 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "subsystem": "accel", 00:18:43.076 "config": [ 00:18:43.076 { 00:18:43.076 "method": "accel_set_options", 00:18:43.076 "params": { 00:18:43.076 "small_cache_size": 128, 00:18:43.076 "large_cache_size": 
16, 00:18:43.076 "task_count": 2048, 00:18:43.076 "sequence_count": 2048, 00:18:43.076 "buf_count": 2048 00:18:43.076 } 00:18:43.076 } 00:18:43.076 ] 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "subsystem": "bdev", 00:18:43.076 "config": [ 00:18:43.076 { 00:18:43.076 "method": "bdev_set_options", 00:18:43.076 "params": { 00:18:43.076 "bdev_io_pool_size": 65535, 00:18:43.076 "bdev_io_cache_size": 256, 00:18:43.076 "bdev_auto_examine": true, 00:18:43.076 "iobuf_small_cache_size": 128, 00:18:43.076 "iobuf_large_cache_size": 16 00:18:43.076 } 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "method": "bdev_raid_set_options", 00:18:43.076 "params": { 00:18:43.076 "process_window_size_kb": 1024 00:18:43.076 } 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "method": "bdev_iscsi_set_options", 00:18:43.076 "params": { 00:18:43.076 "timeout_sec": 30 00:18:43.076 } 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "method": "bdev_nvme_set_options", 00:18:43.076 "params": { 00:18:43.076 "action_on_timeout": "none", 00:18:43.076 "timeout_us": 0, 00:18:43.076 "timeout_admin_us": 0, 00:18:43.076 "keep_alive_timeout_ms": 10000, 00:18:43.076 "transport_retry_count": 4, 00:18:43.076 "arbitration_burst": 0, 00:18:43.076 "low_priority_weight": 0, 00:18:43.076 "medium_priority_weight": 0, 00:18:43.076 "high_priority_weight": 0, 00:18:43.076 "nvme_adminq_poll_period_us": 10000, 00:18:43.076 "nvme_ioq_poll_period_us": 0, 00:18:43.076 "io_queue_requests": 512, 00:18:43.076 "delay_cmd_submit": true, 00:18:43.076 "bdev_retry_count": 3, 00:18:43.076 "transport_ack_timeout": 0, 00:18:43.076 "ctrlr_loss_timeout_sec": 0, 00:18:43.076 "reconnect_delay_sec": 0, 00:18:43.076 "fast_io_fail_timeout_sec": 0, 00:18:43.076 "generate_uuids": false, 00:18:43.076 "transport_tos": 0, 00:18:43.076 "io_path_stat": false, 00:18:43.076 "allow_accel_sequence": false 00:18:43.076 } 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "method": "bdev_nvme_attach_controller", 00:18:43.076 "params": { 00:18:43.076 "name": "TLSTEST", 00:18:43.076 "trtype": "TCP", 00:18:43.076 "adrfam": "IPv4", 00:18:43.076 "traddr": "10.0.0.2", 00:18:43.076 "trsvcid": "4420", 00:18:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.076 "prchk_reftag": false, 00:18:43.076 "prchk_guard": false, 00:18:43.076 "ctrlr_loss_timeout_sec": 0, 00:18:43.076 "reconnect_delay_sec": 0, 00:18:43.076 "fast_io_fail_timeout_sec": 0, 00:18:43.076 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:18:43.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.076 "hdgst": 17:44:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.076 false, 00:18:43.076 "ddgst": false 00:18:43.076 } 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "method": "bdev_nvme_set_hotplug", 00:18:43.076 "params": { 00:18:43.076 "period_us": 100000, 00:18:43.076 "enable": false 00:18:43.076 } 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "method": "bdev_wait_for_examine" 00:18:43.076 } 00:18:43.076 ] 00:18:43.076 }, 00:18:43.076 { 00:18:43.076 "subsystem": "nbd", 00:18:43.076 "config": [] 00:18:43.076 } 00:18:43.076 ] 00:18:43.076 }' 00:18:43.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.076 17:44:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:43.076 17:44:04 -- common/autotest_common.sh@10 -- # set +x 00:18:43.076 [2024-07-24 17:44:04.601660] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:43.076 [2024-07-24 17:44:04.601706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635518 ] 00:18:43.076 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.077 [2024-07-24 17:44:04.651303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.335 [2024-07-24 17:44:04.723649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.335 [2024-07-24 17:44:04.857142] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.903 17:44:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:43.903 17:44:05 -- common/autotest_common.sh@852 -- # return 0 00:18:43.903 17:44:05 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:43.903 Running I/O for 10 seconds... 00:18:56.111 00:18:56.111 Latency(us) 00:18:56.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.111 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:56.111 Verification LBA range: start 0x0 length 0x2000 00:18:56.111 TLSTESTn1 : 10.04 1427.27 5.58 0.00 0.00 89531.00 7750.34 112607.94 00:18:56.111 =================================================================================================================== 00:18:56.111 Total : 1427.27 5.58 0.00 0.00 89531.00 7750.34 112607.94 00:18:56.111 0 00:18:56.111 17:44:15 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:56.111 17:44:15 -- target/tls.sh@223 -- # killprocess 635518 00:18:56.111 17:44:15 -- common/autotest_common.sh@926 -- # '[' -z 635518 ']' 00:18:56.111 17:44:15 -- common/autotest_common.sh@930 -- # kill -0 635518 00:18:56.111 17:44:15 -- common/autotest_common.sh@931 -- # uname 00:18:56.111 17:44:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:56.111 17:44:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 635518 00:18:56.111 17:44:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:56.111 17:44:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:56.111 17:44:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 635518' 00:18:56.111 killing process with pid 635518 00:18:56.111 17:44:15 -- common/autotest_common.sh@945 -- # kill 635518 00:18:56.111 Received shutdown signal, test time was about 10.000000 seconds 00:18:56.111 00:18:56.111 Latency(us) 00:18:56.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.111 =================================================================================================================== 00:18:56.111 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.111 17:44:15 -- common/autotest_common.sh@950 -- # wait 635518 00:18:56.111 17:44:15 -- target/tls.sh@224 -- # killprocess 635268 00:18:56.111 17:44:15 -- common/autotest_common.sh@926 -- # '[' -z 635268 ']' 00:18:56.111 17:44:15 -- common/autotest_common.sh@930 -- # kill -0 635268 00:18:56.111 17:44:15 -- common/autotest_common.sh@931 -- # uname 00:18:56.111 17:44:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:56.111 17:44:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 635268 00:18:56.111 17:44:15 -- common/autotest_common.sh@932 -- # 
process_name=reactor_1 00:18:56.111 17:44:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:56.111 17:44:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 635268' 00:18:56.111 killing process with pid 635268 00:18:56.111 17:44:15 -- common/autotest_common.sh@945 -- # kill 635268 00:18:56.111 17:44:15 -- common/autotest_common.sh@950 -- # wait 635268 00:18:56.111 17:44:16 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:56.111 17:44:16 -- target/tls.sh@227 -- # cleanup 00:18:56.111 17:44:16 -- target/tls.sh@15 -- # process_shm --id 0 00:18:56.111 17:44:16 -- common/autotest_common.sh@796 -- # type=--id 00:18:56.111 17:44:16 -- common/autotest_common.sh@797 -- # id=0 00:18:56.111 17:44:16 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:56.111 17:44:16 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:56.111 17:44:16 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:56.112 17:44:16 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:56.112 17:44:16 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:56.112 17:44:16 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:56.112 nvmf_trace.0 00:18:56.112 17:44:16 -- common/autotest_common.sh@811 -- # return 0 00:18:56.112 17:44:16 -- target/tls.sh@16 -- # killprocess 635518 00:18:56.112 17:44:16 -- common/autotest_common.sh@926 -- # '[' -z 635518 ']' 00:18:56.112 17:44:16 -- common/autotest_common.sh@930 -- # kill -0 635518 00:18:56.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (635518) - No such process 00:18:56.112 17:44:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 635518 is not found' 00:18:56.112 Process with pid 635518 is not found 00:18:56.112 17:44:16 -- target/tls.sh@17 -- # nvmftestfini 00:18:56.112 17:44:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:56.112 17:44:16 -- nvmf/common.sh@116 -- # sync 00:18:56.112 17:44:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:56.112 17:44:16 -- nvmf/common.sh@119 -- # set +e 00:18:56.112 17:44:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:56.112 17:44:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:56.112 rmmod nvme_tcp 00:18:56.112 rmmod nvme_fabrics 00:18:56.112 rmmod nvme_keyring 00:18:56.112 17:44:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:56.112 17:44:16 -- nvmf/common.sh@123 -- # set -e 00:18:56.112 17:44:16 -- nvmf/common.sh@124 -- # return 0 00:18:56.112 17:44:16 -- nvmf/common.sh@477 -- # '[' -n 635268 ']' 00:18:56.112 17:44:16 -- nvmf/common.sh@478 -- # killprocess 635268 00:18:56.112 17:44:16 -- common/autotest_common.sh@926 -- # '[' -z 635268 ']' 00:18:56.112 17:44:16 -- common/autotest_common.sh@930 -- # kill -0 635268 00:18:56.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (635268) - No such process 00:18:56.112 17:44:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 635268 is not found' 00:18:56.112 Process with pid 635268 is not found 00:18:56.112 17:44:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:56.112 17:44:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:56.112 17:44:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:56.112 17:44:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:56.112 
17:44:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:56.112 17:44:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.112 17:44:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.112 17:44:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.681 17:44:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:56.681 17:44:18 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:56.681 00:18:56.681 real 1m12.142s 00:18:56.681 user 1m50.166s 00:18:56.681 sys 0m23.811s 00:18:56.681 17:44:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.681 17:44:18 -- common/autotest_common.sh@10 -- # set +x 00:18:56.681 ************************************ 00:18:56.681 END TEST nvmf_tls 00:18:56.681 ************************************ 00:18:56.941 17:44:18 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:56.941 17:44:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:56.941 17:44:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:56.941 17:44:18 -- common/autotest_common.sh@10 -- # set +x 00:18:56.941 ************************************ 00:18:56.941 START TEST nvmf_fips 00:18:56.941 ************************************ 00:18:56.941 17:44:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:56.941 * Looking for test storage... 00:18:56.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:56.941 17:44:18 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.941 17:44:18 -- nvmf/common.sh@7 -- # uname -s 00:18:56.941 17:44:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.941 17:44:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.941 17:44:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.941 17:44:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.941 17:44:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.941 17:44:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.941 17:44:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.941 17:44:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.941 17:44:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.941 17:44:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.941 17:44:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.941 17:44:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.941 17:44:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.941 17:44:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.941 17:44:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.941 17:44:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.941 17:44:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.941 17:44:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.941 17:44:18 -- scripts/common.sh@442 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.941 17:44:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.941 17:44:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.941 17:44:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.941 17:44:18 -- paths/export.sh@5 -- # export PATH 00:18:56.941 17:44:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.941 17:44:18 -- nvmf/common.sh@46 -- # : 0 00:18:56.941 17:44:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:56.941 17:44:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:56.941 17:44:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:56.941 17:44:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.941 17:44:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.941 17:44:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:56.941 17:44:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:56.941 17:44:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:56.941 17:44:18 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.941 17:44:18 -- fips/fips.sh@89 -- # check_openssl_version 00:18:56.941 17:44:18 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:56.941 17:44:18 -- fips/fips.sh@85 -- # openssl version 00:18:56.941 17:44:18 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:56.941 17:44:18 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:56.941 17:44:18 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:56.941 
17:44:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:56.941 17:44:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:56.941 17:44:18 -- scripts/common.sh@335 -- # IFS=.-: 00:18:56.941 17:44:18 -- scripts/common.sh@335 -- # read -ra ver1 00:18:56.941 17:44:18 -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.941 17:44:18 -- scripts/common.sh@336 -- # read -ra ver2 00:18:56.941 17:44:18 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:56.941 17:44:18 -- scripts/common.sh@339 -- # ver1_l=3 00:18:56.941 17:44:18 -- scripts/common.sh@340 -- # ver2_l=3 00:18:56.941 17:44:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:56.941 17:44:18 -- scripts/common.sh@343 -- # case "$op" in 00:18:56.941 17:44:18 -- scripts/common.sh@347 -- # : 1 00:18:56.941 17:44:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:56.941 17:44:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.941 17:44:18 -- scripts/common.sh@364 -- # decimal 3 00:18:56.941 17:44:18 -- scripts/common.sh@352 -- # local d=3 00:18:56.941 17:44:18 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:56.941 17:44:18 -- scripts/common.sh@354 -- # echo 3 00:18:56.941 17:44:18 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:56.941 17:44:18 -- scripts/common.sh@365 -- # decimal 3 00:18:56.941 17:44:18 -- scripts/common.sh@352 -- # local d=3 00:18:56.941 17:44:18 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:56.941 17:44:18 -- scripts/common.sh@354 -- # echo 3 00:18:56.941 17:44:18 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:56.941 17:44:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:56.941 17:44:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:56.941 17:44:18 -- scripts/common.sh@363 -- # (( v++ )) 00:18:56.941 17:44:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.941 17:44:18 -- scripts/common.sh@364 -- # decimal 0 00:18:56.941 17:44:18 -- scripts/common.sh@352 -- # local d=0 00:18:56.941 17:44:18 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.941 17:44:18 -- scripts/common.sh@354 -- # echo 0 00:18:56.941 17:44:18 -- scripts/common.sh@364 -- # ver1[v]=0 00:18:56.941 17:44:18 -- scripts/common.sh@365 -- # decimal 0 00:18:56.941 17:44:18 -- scripts/common.sh@352 -- # local d=0 00:18:56.941 17:44:18 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.941 17:44:18 -- scripts/common.sh@354 -- # echo 0 00:18:56.941 17:44:18 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:56.941 17:44:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:56.941 17:44:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:56.941 17:44:18 -- scripts/common.sh@363 -- # (( v++ )) 00:18:56.941 17:44:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.941 17:44:18 -- scripts/common.sh@364 -- # decimal 9 00:18:56.941 17:44:18 -- scripts/common.sh@352 -- # local d=9 00:18:56.941 17:44:18 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:56.941 17:44:18 -- scripts/common.sh@354 -- # echo 9 00:18:56.941 17:44:18 -- scripts/common.sh@364 -- # ver1[v]=9 00:18:56.941 17:44:18 -- scripts/common.sh@365 -- # decimal 0 00:18:56.941 17:44:18 -- scripts/common.sh@352 -- # local d=0 00:18:56.941 17:44:18 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.941 17:44:18 -- scripts/common.sh@354 -- # echo 0 00:18:56.941 17:44:18 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:56.941 17:44:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:56.941 17:44:18 -- scripts/common.sh@366 -- # return 0 00:18:56.941 17:44:18 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:56.941 17:44:18 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:56.941 17:44:18 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:56.941 17:44:18 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:56.941 17:44:18 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:56.941 17:44:18 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:56.941 17:44:18 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:56.941 17:44:18 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:18:56.941 17:44:18 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:18:56.941 17:44:18 -- fips/fips.sh@114 -- # build_openssl_config 00:18:56.941 17:44:18 -- fips/fips.sh@37 -- # cat 00:18:56.941 17:44:18 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:56.941 17:44:18 -- fips/fips.sh@58 -- # cat - 00:18:56.941 17:44:18 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:56.941 17:44:18 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:56.941 17:44:18 -- fips/fips.sh@117 -- # mapfile -t providers 00:18:56.941 17:44:18 -- fips/fips.sh@117 -- # grep name 00:18:56.941 17:44:18 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:18:56.942 17:44:18 -- fips/fips.sh@117 -- # openssl list -providers 00:18:56.942 17:44:18 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:56.942 17:44:18 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:56.942 17:44:18 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:56.942 17:44:18 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:56.942 17:44:18 -- common/autotest_common.sh@640 -- # local es=0 00:18:56.942 17:44:18 -- fips/fips.sh@128 -- # : 00:18:56.942 17:44:18 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:56.942 17:44:18 -- common/autotest_common.sh@628 -- # local arg=openssl 00:18:56.942 17:44:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:56.942 17:44:18 -- common/autotest_common.sh@632 -- # type -t openssl 00:18:56.942 17:44:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:56.942 17:44:18 -- common/autotest_common.sh@634 -- # type -P openssl 00:18:56.942 17:44:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:56.942 17:44:18 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:18:56.942 17:44:18 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:18:56.942 17:44:18 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:18:57.199 Error setting digest 00:18:57.199 00528E7A8C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:57.199 00528E7A8C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:57.199 17:44:18 -- common/autotest_common.sh@643 -- # es=1 00:18:57.199 17:44:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:57.199 17:44:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:57.199 17:44:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:57.199 17:44:18 -- fips/fips.sh@131 -- # nvmftestinit 00:18:57.199 17:44:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:57.199 17:44:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.199 17:44:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:57.199 17:44:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:57.199 17:44:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:57.199 17:44:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.199 17:44:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.199 17:44:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.199 17:44:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:57.199 17:44:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:57.199 17:44:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:57.199 17:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:02.474 17:44:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:02.474 17:44:23 -- nvmf/common.sh@290 -- # pci_devs=() 
00:19:02.474 17:44:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:02.474 17:44:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:02.474 17:44:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:02.474 17:44:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:02.474 17:44:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:02.474 17:44:23 -- nvmf/common.sh@294 -- # net_devs=() 00:19:02.474 17:44:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:02.474 17:44:23 -- nvmf/common.sh@295 -- # e810=() 00:19:02.474 17:44:23 -- nvmf/common.sh@295 -- # local -ga e810 00:19:02.474 17:44:23 -- nvmf/common.sh@296 -- # x722=() 00:19:02.474 17:44:23 -- nvmf/common.sh@296 -- # local -ga x722 00:19:02.474 17:44:23 -- nvmf/common.sh@297 -- # mlx=() 00:19:02.474 17:44:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:02.474 17:44:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.474 17:44:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:02.474 17:44:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:02.474 17:44:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:02.474 17:44:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:02.474 17:44:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:02.474 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:02.474 17:44:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:02.474 17:44:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:02.474 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:02.474 17:44:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:02.474 17:44:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 
00:19:02.474 17:44:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:02.474 17:44:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:02.474 17:44:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.474 17:44:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:02.474 17:44:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.474 17:44:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:02.474 Found net devices under 0000:86:00.0: cvl_0_0 00:19:02.474 17:44:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.474 17:44:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:02.474 17:44:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.474 17:44:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:02.474 17:44:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.474 17:44:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:02.474 Found net devices under 0000:86:00.1: cvl_0_1 00:19:02.474 17:44:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.474 17:44:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:02.474 17:44:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:02.474 17:44:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:02.475 17:44:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:02.475 17:44:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:02.475 17:44:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.475 17:44:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.475 17:44:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.475 17:44:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:02.475 17:44:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.475 17:44:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.475 17:44:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:02.475 17:44:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.475 17:44:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.475 17:44:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:02.475 17:44:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:02.475 17:44:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.475 17:44:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.475 17:44:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.475 17:44:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.475 17:44:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:02.475 17:44:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.475 17:44:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.475 17:44:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.475 17:44:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:02.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:02.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:19:02.475 00:19:02.475 --- 10.0.0.2 ping statistics --- 00:19:02.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.475 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:19:02.475 17:44:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:02.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:19:02.475 00:19:02.475 --- 10.0.0.1 ping statistics --- 00:19:02.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.475 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:19:02.475 17:44:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.475 17:44:23 -- nvmf/common.sh@410 -- # return 0 00:19:02.475 17:44:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:02.475 17:44:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.475 17:44:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:02.475 17:44:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:02.475 17:44:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.475 17:44:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:02.475 17:44:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:02.475 17:44:23 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:02.475 17:44:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:02.475 17:44:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:02.475 17:44:23 -- common/autotest_common.sh@10 -- # set +x 00:19:02.475 17:44:23 -- nvmf/common.sh@469 -- # nvmfpid=640748 00:19:02.475 17:44:23 -- nvmf/common.sh@470 -- # waitforlisten 640748 00:19:02.475 17:44:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:02.475 17:44:23 -- common/autotest_common.sh@819 -- # '[' -z 640748 ']' 00:19:02.475 17:44:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.475 17:44:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:02.475 17:44:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.475 17:44:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:02.475 17:44:23 -- common/autotest_common.sh@10 -- # set +x 00:19:02.475 [2024-07-24 17:44:23.726809] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:02.475 [2024-07-24 17:44:23.726859] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.475 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.475 [2024-07-24 17:44:23.786324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.475 [2024-07-24 17:44:23.859870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:02.475 [2024-07-24 17:44:23.859982] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.475 [2024-07-24 17:44:23.859990] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:02.475 [2024-07-24 17:44:23.859997] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.475 [2024-07-24 17:44:23.860017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.108 17:44:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:03.108 17:44:24 -- common/autotest_common.sh@852 -- # return 0 00:19:03.108 17:44:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:03.108 17:44:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:03.108 17:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:03.108 17:44:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.108 17:44:24 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:03.108 17:44:24 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:03.108 17:44:24 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:03.108 17:44:24 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:03.108 17:44:24 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:03.108 17:44:24 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:03.108 17:44:24 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:03.108 17:44:24 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:03.108 [2024-07-24 17:44:24.699964] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.367 [2024-07-24 17:44:24.715966] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.367 [2024-07-24 17:44:24.716149] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.367 malloc0 00:19:03.367 17:44:24 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:03.367 17:44:24 -- fips/fips.sh@148 -- # bdevperf_pid=640982 00:19:03.367 17:44:24 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:03.367 17:44:24 -- fips/fips.sh@149 -- # waitforlisten 640982 /var/tmp/bdevperf.sock 00:19:03.367 17:44:24 -- common/autotest_common.sh@819 -- # '[' -z 640982 ']' 00:19:03.367 17:44:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.367 17:44:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:03.367 17:44:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.367 17:44:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:03.367 17:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:03.367 [2024-07-24 17:44:24.829967] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:03.367 [2024-07-24 17:44:24.830014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid640982 ] 00:19:03.367 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.367 [2024-07-24 17:44:24.878619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.367 [2024-07-24 17:44:24.948961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.302 17:44:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:04.302 17:44:25 -- common/autotest_common.sh@852 -- # return 0 00:19:04.302 17:44:25 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.302 [2024-07-24 17:44:25.763748] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.302 TLSTESTn1 00:19:04.302 17:44:25 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:04.560 Running I/O for 10 seconds... 00:19:14.539 00:19:14.539 Latency(us) 00:19:14.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.539 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.539 Verification LBA range: start 0x0 length 0x2000 00:19:14.539 TLSTESTn1 : 10.04 1420.08 5.55 0.00 0.00 90005.57 10998.65 126740.93 00:19:14.539 =================================================================================================================== 00:19:14.539 Total : 1420.08 5.55 0.00 0.00 90005.57 10998.65 126740.93 00:19:14.539 0 00:19:14.539 17:44:36 -- fips/fips.sh@1 -- # cleanup 00:19:14.539 17:44:36 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:14.539 17:44:36 -- common/autotest_common.sh@796 -- # type=--id 00:19:14.539 17:44:36 -- common/autotest_common.sh@797 -- # id=0 00:19:14.539 17:44:36 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:14.539 17:44:36 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:14.539 17:44:36 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:14.539 17:44:36 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:14.539 17:44:36 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:14.539 17:44:36 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:14.539 nvmf_trace.0 00:19:14.539 17:44:36 -- common/autotest_common.sh@811 -- # return 0 00:19:14.539 17:44:36 -- fips/fips.sh@16 -- # killprocess 640982 00:19:14.539 17:44:36 -- common/autotest_common.sh@926 -- # '[' -z 640982 ']' 00:19:14.539 17:44:36 -- common/autotest_common.sh@930 -- # kill -0 640982 00:19:14.539 17:44:36 -- common/autotest_common.sh@931 -- # uname 00:19:14.539 17:44:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:14.539 17:44:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 640982 00:19:14.798 17:44:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:14.798 17:44:36 -- common/autotest_common.sh@936 -- # '[' 
reactor_2 = sudo ']' 00:19:14.798 17:44:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 640982' 00:19:14.798 killing process with pid 640982 00:19:14.798 17:44:36 -- common/autotest_common.sh@945 -- # kill 640982 00:19:14.798 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.798 00:19:14.798 Latency(us) 00:19:14.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.798 =================================================================================================================== 00:19:14.798 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.798 17:44:36 -- common/autotest_common.sh@950 -- # wait 640982 00:19:14.798 17:44:36 -- fips/fips.sh@17 -- # nvmftestfini 00:19:14.798 17:44:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:14.798 17:44:36 -- nvmf/common.sh@116 -- # sync 00:19:14.798 17:44:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:14.798 17:44:36 -- nvmf/common.sh@119 -- # set +e 00:19:14.798 17:44:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:14.798 17:44:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:14.798 rmmod nvme_tcp 00:19:14.798 rmmod nvme_fabrics 00:19:14.798 rmmod nvme_keyring 00:19:15.057 17:44:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:15.057 17:44:36 -- nvmf/common.sh@123 -- # set -e 00:19:15.057 17:44:36 -- nvmf/common.sh@124 -- # return 0 00:19:15.057 17:44:36 -- nvmf/common.sh@477 -- # '[' -n 640748 ']' 00:19:15.057 17:44:36 -- nvmf/common.sh@478 -- # killprocess 640748 00:19:15.057 17:44:36 -- common/autotest_common.sh@926 -- # '[' -z 640748 ']' 00:19:15.057 17:44:36 -- common/autotest_common.sh@930 -- # kill -0 640748 00:19:15.057 17:44:36 -- common/autotest_common.sh@931 -- # uname 00:19:15.057 17:44:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:15.057 17:44:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 640748 00:19:15.057 17:44:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:15.057 17:44:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:15.057 17:44:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 640748' 00:19:15.057 killing process with pid 640748 00:19:15.057 17:44:36 -- common/autotest_common.sh@945 -- # kill 640748 00:19:15.057 17:44:36 -- common/autotest_common.sh@950 -- # wait 640748 00:19:15.316 17:44:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:15.316 17:44:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:15.316 17:44:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:15.316 17:44:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:15.316 17:44:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:15.316 17:44:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.316 17:44:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.316 17:44:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.224 17:44:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:17.224 17:44:38 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:17.224 00:19:17.224 real 0m20.443s 00:19:17.224 user 0m23.134s 00:19:17.224 sys 0m8.097s 00:19:17.224 17:44:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:17.224 17:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:17.224 ************************************ 00:19:17.224 END TEST nvmf_fips 00:19:17.224 
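To recap the fips test that just finished: the TLS pre-shared key printed in the trace is stored in key.txt with 0600 permissions, the target side is configured through scripts/rpc.py (the listener on 10.0.0.2:4420 logs that TLS support is experimental), and bdevperf attaches to the subsystem with that PSK and runs a 10-second verify workload. A rough sketch with paths shortened to the SPDK tree; the redirect into key.txt is implied by the trace rather than shown in it:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > key.txt && chmod 0600 key.txt

# bdevperf is started idle (-z) and driven over its RPC socket; the controller is
# attached over TCP with the PSK, then perform_tests kicks off the 10 s verify run
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests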
************************************ 00:19:17.224 17:44:38 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:19:17.224 17:44:38 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:19:17.224 17:44:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:17.224 17:44:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:17.224 17:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:17.224 ************************************ 00:19:17.224 START TEST nvmf_fuzz 00:19:17.224 ************************************ 00:19:17.224 17:44:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:19:17.483 * Looking for test storage... 00:19:17.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.483 17:44:38 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.483 17:44:38 -- nvmf/common.sh@7 -- # uname -s 00:19:17.483 17:44:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.483 17:44:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.483 17:44:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.483 17:44:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.483 17:44:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.483 17:44:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.483 17:44:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.483 17:44:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.483 17:44:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.483 17:44:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.483 17:44:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:17.483 17:44:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:17.483 17:44:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.483 17:44:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.483 17:44:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.483 17:44:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.483 17:44:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.483 17:44:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.483 17:44:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.483 17:44:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.483 17:44:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.483 17:44:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.483 17:44:38 -- paths/export.sh@5 -- # export PATH 00:19:17.483 17:44:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.483 17:44:38 -- nvmf/common.sh@46 -- # : 0 00:19:17.483 17:44:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:17.484 17:44:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:17.484 17:44:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:17.484 17:44:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.484 17:44:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.484 17:44:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:17.484 17:44:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:17.484 17:44:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:17.484 17:44:38 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:19:17.484 17:44:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:17.484 17:44:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.484 17:44:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:17.484 17:44:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:17.484 17:44:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:17.484 17:44:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.484 17:44:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.484 17:44:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.484 17:44:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:17.484 17:44:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:17.484 17:44:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:17.484 17:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.763 17:44:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:22.763 17:44:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:22.763 17:44:44 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:19:22.763 17:44:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:22.763 17:44:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:22.763 17:44:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:22.763 17:44:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:22.763 17:44:44 -- nvmf/common.sh@294 -- # net_devs=() 00:19:22.763 17:44:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:22.763 17:44:44 -- nvmf/common.sh@295 -- # e810=() 00:19:22.763 17:44:44 -- nvmf/common.sh@295 -- # local -ga e810 00:19:22.763 17:44:44 -- nvmf/common.sh@296 -- # x722=() 00:19:22.763 17:44:44 -- nvmf/common.sh@296 -- # local -ga x722 00:19:22.763 17:44:44 -- nvmf/common.sh@297 -- # mlx=() 00:19:22.763 17:44:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:22.763 17:44:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.763 17:44:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:22.763 17:44:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:22.763 17:44:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:22.763 17:44:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:22.763 17:44:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:22.763 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:22.763 17:44:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:22.763 17:44:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:22.763 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:22.763 17:44:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:22.763 17:44:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
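gather_supported_nvmf_pci_devs has just matched both functions of the E810 NIC (vendor 0x8086, device 0x159b) at 0000:86:00.0 and 0000:86:00.1; the interface names reported next are read straight out of sysfs. Roughly what the pci_net_devs expansion in the trace does:

for pci in 0000:86:00.0 0000:86:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"    # prints cvl_0_0 and cvl_0_1 on this host
done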
00:19:22.763 17:44:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:22.763 17:44:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.763 17:44:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:22.763 17:44:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.763 17:44:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:22.763 Found net devices under 0000:86:00.0: cvl_0_0 00:19:22.763 17:44:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.763 17:44:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:22.763 17:44:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.763 17:44:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:22.763 17:44:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.763 17:44:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:22.763 Found net devices under 0000:86:00.1: cvl_0_1 00:19:22.763 17:44:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.763 17:44:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:22.763 17:44:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:22.763 17:44:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:22.763 17:44:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.763 17:44:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.763 17:44:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.763 17:44:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:22.763 17:44:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.763 17:44:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.763 17:44:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:22.763 17:44:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.763 17:44:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.763 17:44:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:22.763 17:44:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:22.763 17:44:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.763 17:44:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.763 17:44:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.763 17:44:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.763 17:44:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:22.763 17:44:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.763 17:44:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.763 17:44:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.763 17:44:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:22.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:22.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:19:22.763 00:19:22.763 --- 10.0.0.2 ping statistics --- 00:19:22.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.763 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:19:22.763 17:44:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:19:22.763 00:19:22.763 --- 10.0.0.1 ping statistics --- 00:19:22.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.763 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:19:22.763 17:44:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.763 17:44:44 -- nvmf/common.sh@410 -- # return 0 00:19:22.763 17:44:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:22.763 17:44:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.763 17:44:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:22.763 17:44:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.763 17:44:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:22.763 17:44:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:22.763 17:44:44 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=646390 00:19:22.763 17:44:44 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:22.763 17:44:44 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:22.763 17:44:44 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 646390 00:19:22.763 17:44:44 -- common/autotest_common.sh@819 -- # '[' -z 646390 ']' 00:19:22.763 17:44:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.763 17:44:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:22.763 17:44:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
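Before any fuzzing starts, fabrics_fuzz.sh provisions a minimal target through the rpc_cmd helper (the autotest wrapper that issues scripts/rpc.py calls against the nvmf_tgt just started). The calls traced below amount to one TCP transport, one 64 MiB malloc bdev, and one subsystem exposing it on 10.0.0.2:4420:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create -b Malloc0 64 512          # 64 MiB RAM-backed bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420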
00:19:22.763 17:44:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:22.763 17:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:23.701 17:44:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:23.701 17:44:45 -- common/autotest_common.sh@852 -- # return 0 00:19:23.701 17:44:45 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:23.701 17:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.701 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.701 17:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.701 17:44:45 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:19:23.701 17:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.701 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.701 Malloc0 00:19:23.701 17:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.701 17:44:45 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:23.701 17:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.701 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.701 17:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.701 17:44:45 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:23.701 17:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.701 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.701 17:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.701 17:44:45 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.701 17:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.701 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.701 17:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.701 17:44:45 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:19:23.701 17:44:45 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:19:55.862 Fuzzing completed. Shutting down the fuzz application 00:19:55.862 00:19:55.862 Dumping successful admin opcodes: 00:19:55.862 8, 9, 10, 24, 00:19:55.862 Dumping successful io opcodes: 00:19:55.862 0, 9, 00:19:55.862 NS: 0x200003aeff00 I/O qp, Total commands completed: 961403, total successful commands: 5622, random_seed: 3736003328 00:19:55.862 NS: 0x200003aeff00 admin qp, Total commands completed: 120329, total successful commands: 985, random_seed: 2009235776 00:19:55.862 17:45:15 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:19:55.862 Fuzzing completed. 
Shutting down the fuzz application 00:19:55.862 00:19:55.862 Dumping successful admin opcodes: 00:19:55.862 24, 00:19:55.862 Dumping successful io opcodes: 00:19:55.862 00:19:55.862 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2943244512 00:19:55.862 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2943338580 00:19:55.862 17:45:17 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.862 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.862 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:19:55.862 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.862 17:45:17 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:55.862 17:45:17 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:19:55.862 17:45:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:55.862 17:45:17 -- nvmf/common.sh@116 -- # sync 00:19:55.862 17:45:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:55.862 17:45:17 -- nvmf/common.sh@119 -- # set +e 00:19:55.862 17:45:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:55.862 17:45:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:55.862 rmmod nvme_tcp 00:19:55.862 rmmod nvme_fabrics 00:19:55.862 rmmod nvme_keyring 00:19:55.862 17:45:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:55.862 17:45:17 -- nvmf/common.sh@123 -- # set -e 00:19:55.862 17:45:17 -- nvmf/common.sh@124 -- # return 0 00:19:55.862 17:45:17 -- nvmf/common.sh@477 -- # '[' -n 646390 ']' 00:19:55.862 17:45:17 -- nvmf/common.sh@478 -- # killprocess 646390 00:19:55.862 17:45:17 -- common/autotest_common.sh@926 -- # '[' -z 646390 ']' 00:19:55.862 17:45:17 -- common/autotest_common.sh@930 -- # kill -0 646390 00:19:55.862 17:45:17 -- common/autotest_common.sh@931 -- # uname 00:19:55.862 17:45:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:55.862 17:45:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 646390 00:19:55.862 17:45:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:55.862 17:45:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:55.862 17:45:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 646390' 00:19:55.862 killing process with pid 646390 00:19:55.862 17:45:17 -- common/autotest_common.sh@945 -- # kill 646390 00:19:55.862 17:45:17 -- common/autotest_common.sh@950 -- # wait 646390 00:19:55.862 17:45:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:55.862 17:45:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:55.862 17:45:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:55.862 17:45:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.862 17:45:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:55.862 17:45:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.862 17:45:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.862 17:45:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.401 17:45:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:58.401 17:45:19 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:19:58.401 00:19:58.401 real 0m40.713s 00:19:58.401 user 0m54.801s 00:19:58.401 sys 0m15.379s 
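For reference, the two nvme_fuzz passes that produced the opcode dumps above share the same transport ID but differ in how commands are generated: the first spends 30 seconds on randomly generated admin and I/O commands from seed 123456, the second replays the canned command set in example.json. Condensed from the trace, with paths shortened to the SPDK tree:

fuzz=./test/app/fuzz/nvme_fuzz/nvme_fuzz
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

# pass 1: time-boxed random fuzzing, seeded for reproducibility
$fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a

# pass 2: replay of the bundled example.json command list
$fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j ./test/app/fuzz/nvme_fuzz/example.json -a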
00:19:58.401 17:45:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.401 17:45:19 -- common/autotest_common.sh@10 -- # set +x 00:19:58.401 ************************************ 00:19:58.401 END TEST nvmf_fuzz 00:19:58.401 ************************************ 00:19:58.401 17:45:19 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:19:58.401 17:45:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:58.401 17:45:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:58.401 17:45:19 -- common/autotest_common.sh@10 -- # set +x 00:19:58.401 ************************************ 00:19:58.401 START TEST nvmf_multiconnection 00:19:58.401 ************************************ 00:19:58.401 17:45:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:19:58.401 * Looking for test storage... 00:19:58.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:58.401 17:45:19 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.401 17:45:19 -- nvmf/common.sh@7 -- # uname -s 00:19:58.401 17:45:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.401 17:45:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.401 17:45:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.401 17:45:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.401 17:45:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.401 17:45:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.401 17:45:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.401 17:45:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.401 17:45:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.401 17:45:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.401 17:45:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:58.401 17:45:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:58.401 17:45:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.401 17:45:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.401 17:45:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.401 17:45:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:58.401 17:45:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.401 17:45:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.401 17:45:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.401 17:45:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.401 17:45:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.401 17:45:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.401 17:45:19 -- paths/export.sh@5 -- # export PATH 00:19:58.401 17:45:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.401 17:45:19 -- nvmf/common.sh@46 -- # : 0 00:19:58.401 17:45:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:58.401 17:45:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:58.401 17:45:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:58.401 17:45:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.401 17:45:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.401 17:45:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:58.401 17:45:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:58.401 17:45:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:58.401 17:45:19 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:58.401 17:45:19 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:58.401 17:45:19 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:19:58.401 17:45:19 -- target/multiconnection.sh@16 -- # nvmftestinit 00:19:58.401 17:45:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:58.401 17:45:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.401 17:45:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:58.401 17:45:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:58.401 17:45:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:58.401 17:45:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.401 17:45:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.401 17:45:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.401 17:45:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:58.401 17:45:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:58.401 17:45:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:58.401 17:45:19 -- common/autotest_common.sh@10 -- 
# set +x 00:20:03.682 17:45:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:03.682 17:45:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:03.682 17:45:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:03.682 17:45:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:03.682 17:45:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:03.682 17:45:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:03.682 17:45:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:03.682 17:45:24 -- nvmf/common.sh@294 -- # net_devs=() 00:20:03.682 17:45:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:03.682 17:45:24 -- nvmf/common.sh@295 -- # e810=() 00:20:03.682 17:45:24 -- nvmf/common.sh@295 -- # local -ga e810 00:20:03.682 17:45:24 -- nvmf/common.sh@296 -- # x722=() 00:20:03.682 17:45:24 -- nvmf/common.sh@296 -- # local -ga x722 00:20:03.682 17:45:24 -- nvmf/common.sh@297 -- # mlx=() 00:20:03.682 17:45:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:03.682 17:45:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.682 17:45:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:03.682 17:45:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:03.682 17:45:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:03.682 17:45:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:03.682 17:45:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:03.682 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:03.682 17:45:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:03.682 17:45:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:03.682 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:03.682 17:45:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.682 17:45:24 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:03.682 17:45:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:03.682 17:45:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.682 17:45:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:03.682 17:45:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.682 17:45:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:03.682 Found net devices under 0000:86:00.0: cvl_0_0 00:20:03.682 17:45:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.682 17:45:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:03.682 17:45:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.682 17:45:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:03.682 17:45:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.682 17:45:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:03.682 Found net devices under 0000:86:00.1: cvl_0_1 00:20:03.682 17:45:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.682 17:45:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:03.682 17:45:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:03.682 17:45:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:03.682 17:45:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.682 17:45:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.682 17:45:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.682 17:45:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:03.682 17:45:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.682 17:45:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.682 17:45:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:03.682 17:45:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.682 17:45:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.682 17:45:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:03.682 17:45:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:03.682 17:45:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.682 17:45:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.682 17:45:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.682 17:45:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.682 17:45:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:03.682 17:45:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.682 17:45:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.682 17:45:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.682 17:45:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:03.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:03.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:20:03.682 00:20:03.682 --- 10.0.0.2 ping statistics --- 00:20:03.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.682 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:03.682 17:45:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:20:03.682 00:20:03.682 --- 10.0.0.1 ping statistics --- 00:20:03.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.682 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:20:03.682 17:45:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.682 17:45:24 -- nvmf/common.sh@410 -- # return 0 00:20:03.682 17:45:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:03.682 17:45:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.682 17:45:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:03.682 17:45:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.682 17:45:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:03.682 17:45:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:03.682 17:45:24 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:03.682 17:45:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:03.682 17:45:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:03.682 17:45:24 -- common/autotest_common.sh@10 -- # set +x 00:20:03.682 17:45:24 -- nvmf/common.sh@469 -- # nvmfpid=655714 00:20:03.682 17:45:24 -- nvmf/common.sh@470 -- # waitforlisten 655714 00:20:03.682 17:45:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:03.682 17:45:24 -- common/autotest_common.sh@819 -- # '[' -z 655714 ']' 00:20:03.682 17:45:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.682 17:45:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:03.682 17:45:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.682 17:45:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:03.682 17:45:24 -- common/autotest_common.sh@10 -- # set +x 00:20:03.682 [2024-07-24 17:45:24.948519] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:03.683 [2024-07-24 17:45:24.948564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.683 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.683 [2024-07-24 17:45:25.006186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.683 [2024-07-24 17:45:25.078955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:03.683 [2024-07-24 17:45:25.079093] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.683 [2024-07-24 17:45:25.079103] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
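The multiconnection target setup that follows is a straight loop: one TCP transport, then NVMF_SUBSYS=11 subsystems, each backed by its own 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512) and listening on the same 10.0.0.2:4420 endpoint. Condensed from the rpc_cmd trace below:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done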
00:20:03.683 [2024-07-24 17:45:25.079110] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.683 [2024-07-24 17:45:25.079154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.683 [2024-07-24 17:45:25.079252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.683 [2024-07-24 17:45:25.079318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.683 [2024-07-24 17:45:25.079319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.250 17:45:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:04.250 17:45:25 -- common/autotest_common.sh@852 -- # return 0 00:20:04.250 17:45:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:04.250 17:45:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:04.250 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.250 17:45:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.250 17:45:25 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.250 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.250 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.250 [2024-07-24 17:45:25.800353] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.250 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.250 17:45:25 -- target/multiconnection.sh@21 -- # seq 1 11 00:20:04.250 17:45:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.250 17:45:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:04.250 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.250 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.250 Malloc1 00:20:04.250 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.250 17:45:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:04.251 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.251 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.251 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.251 17:45:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:04.251 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.251 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.510 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.510 17:45:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.510 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.510 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.510 [2024-07-24 17:45:25.856002] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.510 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.510 17:45:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.510 17:45:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:04.510 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.510 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.510 Malloc2 00:20:04.510 17:45:25 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.510 17:45:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:04.510 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.510 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.510 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.510 17:45:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:04.510 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.510 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.510 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.510 17:45:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:04.510 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.510 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.510 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.510 17:45:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.510 17:45:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:04.510 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.510 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.510 Malloc3 00:20:04.510 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.510 17:45:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:04.511 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:04.511 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:04.511 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.511 17:45:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:04.511 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 Malloc4 00:20:04.511 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:04.511 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:04.511 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:25 
-- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:20:04.511 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.511 17:45:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:04.511 17:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 Malloc5 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.511 17:45:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 Malloc6 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.511 17:45:26 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 Malloc7 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.511 17:45:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:20:04.511 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.511 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.770 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.770 17:45:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.770 17:45:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:04.770 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.770 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.770 Malloc8 00:20:04.770 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.770 17:45:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:04.770 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.770 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.770 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.770 17:45:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:04.770 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.770 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.770 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.771 17:45:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 Malloc9 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 17:45:26 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.771 17:45:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 Malloc10 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.771 17:45:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 Malloc11 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:20:04.771 17:45:26 -- common/autotest_common.sh@551 -- # 
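The setup phase traced here repeats the same four RPCs for every Malloc bdev / cnode pair (multiconnection.sh lines 21-25 in the trace, with the loop running over seq 1 $NVMF_SUBSYS, i.e. 11 subsystems). Outside the harness, where rpc_cmd wraps the SPDK RPC client, a standalone loop would look roughly like the sketch below; scripts/rpc.py and its default RPC socket are assumptions, everything else is taken from the traced commands.

#!/usr/bin/env bash
# Sketch of the per-subsystem setup loop seen in the trace above.
# rpc_cmd in the harness wraps the SPDK RPC client; scripts/rpc.py talking to
# the default /var/tmp/spdk.sock socket is assumed here.
NVMF_SUBSYS=11
for i in $(seq 1 $NVMF_SUBSYS); do
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i          # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done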
xtrace_disable 00:20:04.771 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 17:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:04.771 17:45:26 -- target/multiconnection.sh@28 -- # seq 1 11 00:20:04.771 17:45:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:04.771 17:45:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:06.148 17:45:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:06.148 17:45:27 -- common/autotest_common.sh@1177 -- # local i=0 00:20:06.148 17:45:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:06.148 17:45:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:06.148 17:45:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:08.052 17:45:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:08.052 17:45:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:08.052 17:45:29 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:20:08.052 17:45:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:08.052 17:45:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:08.052 17:45:29 -- common/autotest_common.sh@1187 -- # return 0 00:20:08.052 17:45:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:08.052 17:45:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:20:09.429 17:45:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:09.429 17:45:30 -- common/autotest_common.sh@1177 -- # local i=0 00:20:09.429 17:45:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:09.429 17:45:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:09.429 17:45:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:11.331 17:45:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:11.331 17:45:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:11.331 17:45:32 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:20:11.331 17:45:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:11.331 17:45:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:11.331 17:45:32 -- common/autotest_common.sh@1187 -- # return 0 00:20:11.331 17:45:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:11.331 17:45:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:20:12.708 17:45:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:12.708 17:45:33 -- common/autotest_common.sh@1177 -- # local i=0 00:20:12.708 17:45:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:12.708 17:45:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:12.708 17:45:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:14.612 17:45:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:14.612 17:45:35 -- common/autotest_common.sh@1186 -- # 
lsblk -l -o NAME,SERIAL 00:20:14.612 17:45:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:20:14.612 17:45:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:14.612 17:45:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:14.612 17:45:36 -- common/autotest_common.sh@1187 -- # return 0 00:20:14.612 17:45:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:14.612 17:45:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:20:15.989 17:45:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:20:15.989 17:45:37 -- common/autotest_common.sh@1177 -- # local i=0 00:20:15.989 17:45:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:15.989 17:45:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:15.989 17:45:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:17.934 17:45:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:17.935 17:45:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:17.935 17:45:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:20:17.935 17:45:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:17.935 17:45:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:17.935 17:45:39 -- common/autotest_common.sh@1187 -- # return 0 00:20:17.935 17:45:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.935 17:45:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:20:19.312 17:45:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:20:19.312 17:45:40 -- common/autotest_common.sh@1177 -- # local i=0 00:20:19.312 17:45:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:19.312 17:45:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:19.312 17:45:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:21.216 17:45:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:21.216 17:45:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:21.216 17:45:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:20:21.216 17:45:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:21.216 17:45:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:21.216 17:45:42 -- common/autotest_common.sh@1187 -- # return 0 00:20:21.216 17:45:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:21.216 17:45:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:20:22.594 17:45:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:20:22.594 17:45:43 -- common/autotest_common.sh@1177 -- # local i=0 00:20:22.594 17:45:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:22.594 17:45:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:22.594 17:45:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:24.498 
17:45:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:24.498 17:45:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:24.498 17:45:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:20:24.498 17:45:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:24.498 17:45:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:24.498 17:45:45 -- common/autotest_common.sh@1187 -- # return 0 00:20:24.498 17:45:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:24.498 17:45:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:20:25.875 17:45:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:20:25.875 17:45:47 -- common/autotest_common.sh@1177 -- # local i=0 00:20:25.875 17:45:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:25.875 17:45:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:25.875 17:45:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:27.781 17:45:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:27.781 17:45:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:27.781 17:45:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:20:27.781 17:45:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:27.781 17:45:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:27.781 17:45:49 -- common/autotest_common.sh@1187 -- # return 0 00:20:27.781 17:45:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:27.781 17:45:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:20:29.160 17:45:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:20:29.160 17:45:50 -- common/autotest_common.sh@1177 -- # local i=0 00:20:29.160 17:45:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:29.160 17:45:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:29.160 17:45:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:31.063 17:45:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:31.063 17:45:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:31.063 17:45:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:20:31.063 17:45:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:31.063 17:45:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:31.063 17:45:52 -- common/autotest_common.sh@1187 -- # return 0 00:20:31.063 17:45:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.063 17:45:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:20:32.440 17:45:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:20:32.440 17:45:53 -- common/autotest_common.sh@1177 -- # local i=0 00:20:32.440 17:45:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:32.440 17:45:53 -- 
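Each connection in this phase pairs an nvme-cli connect against one cnode with the harness's waitforserial helper, which sleeps and re-polls lsblk until a block device carrying the expected SPDK$i serial shows up (up to 16 attempts in the traced loop). A hedged reconstruction of that pattern for a single subsystem, with the hostnqn/hostid taken verbatim from the trace:

# Connect one subsystem and wait for its namespace to appear; the index 3 is
# only an example, the test loops i over 1..11. The retry loop is a sketch of
# the waitforserial trace (sleep 2, lsblk -l -o NAME,SERIAL | grep -c SPDK$i).
i=3
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
             --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
             -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
tries=0
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do
    tries=$((tries + 1))
    [ "$tries" -gt 15 ] && { echo "SPDK$i namespace never appeared" >&2; exit 1; }
    sleep 2
done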
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:32.440 17:45:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:34.973 17:45:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:34.973 17:45:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:34.974 17:45:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:20:34.974 17:45:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:34.974 17:45:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:34.974 17:45:55 -- common/autotest_common.sh@1187 -- # return 0 00:20:34.974 17:45:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:34.974 17:45:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:20:35.911 17:45:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:20:35.911 17:45:57 -- common/autotest_common.sh@1177 -- # local i=0 00:20:35.911 17:45:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:35.911 17:45:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:35.911 17:45:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:38.487 17:45:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:38.487 17:45:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:38.487 17:45:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:20:38.487 17:45:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:38.487 17:45:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:38.487 17:45:59 -- common/autotest_common.sh@1187 -- # return 0 00:20:38.487 17:45:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:38.487 17:45:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:20:39.422 17:46:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:20:39.422 17:46:00 -- common/autotest_common.sh@1177 -- # local i=0 00:20:39.422 17:46:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:39.422 17:46:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:39.422 17:46:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:41.325 17:46:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:41.325 17:46:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:41.325 17:46:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:20:41.325 17:46:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:41.325 17:46:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:41.325 17:46:02 -- common/autotest_common.sh@1187 -- # return 0 00:20:41.325 17:46:02 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:20:41.325 [global] 00:20:41.325 thread=1 00:20:41.325 invalidate=1 00:20:41.325 rw=read 00:20:41.325 time_based=1 00:20:41.325 runtime=10 00:20:41.325 ioengine=libaio 00:20:41.325 direct=1 00:20:41.325 bs=262144 00:20:41.325 iodepth=64 00:20:41.325 norandommap=1 00:20:41.325 numjobs=1 00:20:41.325 00:20:41.325 [job0] 
00:20:41.325 filename=/dev/nvme0n1 00:20:41.325 [job1] 00:20:41.325 filename=/dev/nvme10n1 00:20:41.325 [job2] 00:20:41.325 filename=/dev/nvme1n1 00:20:41.325 [job3] 00:20:41.325 filename=/dev/nvme2n1 00:20:41.325 [job4] 00:20:41.325 filename=/dev/nvme3n1 00:20:41.325 [job5] 00:20:41.325 filename=/dev/nvme4n1 00:20:41.325 [job6] 00:20:41.325 filename=/dev/nvme5n1 00:20:41.325 [job7] 00:20:41.325 filename=/dev/nvme6n1 00:20:41.325 [job8] 00:20:41.325 filename=/dev/nvme7n1 00:20:41.325 [job9] 00:20:41.325 filename=/dev/nvme8n1 00:20:41.325 [job10] 00:20:41.325 filename=/dev/nvme9n1 00:20:41.584 Could not set queue depth (nvme0n1) 00:20:41.584 Could not set queue depth (nvme10n1) 00:20:41.584 Could not set queue depth (nvme1n1) 00:20:41.584 Could not set queue depth (nvme2n1) 00:20:41.584 Could not set queue depth (nvme3n1) 00:20:41.584 Could not set queue depth (nvme4n1) 00:20:41.584 Could not set queue depth (nvme5n1) 00:20:41.584 Could not set queue depth (nvme6n1) 00:20:41.584 Could not set queue depth (nvme7n1) 00:20:41.584 Could not set queue depth (nvme8n1) 00:20:41.584 Could not set queue depth (nvme9n1) 00:20:41.842 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:41.842 fio-3.35 00:20:41.842 Starting 11 threads 00:20:54.051 00:20:54.051 job0: (groupid=0, jobs=1): err= 0: pid=662381: Wed Jul 24 17:46:13 2024 00:20:54.051 read: IOPS=629, BW=157MiB/s (165MB/s)(1586MiB/10082msec) 00:20:54.051 slat (usec): min=9, max=94184, avg=1302.65, stdev=4594.80 00:20:54.051 clat (msec): min=9, max=210, avg=100.27, stdev=33.17 00:20:54.051 lat (msec): min=9, max=210, avg=101.57, stdev=33.61 00:20:54.051 clat percentiles (msec): 00:20:54.051 | 1.00th=[ 27], 5.00th=[ 55], 10.00th=[ 62], 20.00th=[ 72], 00:20:54.051 | 30.00th=[ 80], 40.00th=[ 89], 50.00th=[ 97], 60.00th=[ 108], 00:20:54.051 | 70.00th=[ 118], 80.00th=[ 130], 90.00th=[ 146], 95.00th=[ 159], 00:20:54.051 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 205], 99.95th=[ 205], 00:20:54.051 | 99.99th=[ 211] 00:20:54.051 bw ( KiB/s): min=114688, max=246272, per=7.50%, avg=160755.45, stdev=38482.35, samples=20 00:20:54.051 iops : min= 448, max= 962, avg=627.95, stdev=150.32, samples=20 00:20:54.051 lat (msec) : 10=0.03%, 20=0.57%, 50=2.85%, 100=49.77%, 250=46.78% 00:20:54.051 
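The fio-wrapper invocation above (-i 262144 -d 64 -t read -r 10) appears to map directly onto the bs, iodepth, rw and runtime values printed in the job file. Stripped of timestamps, an equivalent standalone run over the same eleven namespaces would look roughly like the sketch below; only the file name multiconnection.fio is invented for illustration, and the randwrite pass later in the log uses the same layout with rw=randwrite.

# Rebuild the traced job file and run it with fio directly.
cat > multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF
n=0
for dev in nvme0n1 nvme10n1 nvme1n1 nvme2n1 nvme3n1 nvme4n1 nvme5n1 nvme6n1 nvme7n1 nvme8n1 nvme9n1; do
    printf '[job%d]\nfilename=/dev/%s\n' "$n" "$dev" >> multiconnection.fio
    n=$((n + 1))
done
fio multiconnection.fio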
cpu : usr=0.28%, sys=2.27%, ctx=1630, majf=0, minf=4097 00:20:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:20:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.051 issued rwts: total=6343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.051 job1: (groupid=0, jobs=1): err= 0: pid=662391: Wed Jul 24 17:46:13 2024 00:20:54.051 read: IOPS=679, BW=170MiB/s (178MB/s)(1710MiB/10065msec) 00:20:54.051 slat (usec): min=10, max=83141, avg=1260.37, stdev=3999.57 00:20:54.051 clat (msec): min=5, max=189, avg=92.79, stdev=31.63 00:20:54.051 lat (msec): min=5, max=199, avg=94.05, stdev=32.03 00:20:54.051 clat percentiles (msec): 00:20:54.051 | 1.00th=[ 21], 5.00th=[ 41], 10.00th=[ 54], 20.00th=[ 67], 00:20:54.051 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 93], 60.00th=[ 102], 00:20:54.051 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 134], 95.00th=[ 146], 00:20:54.051 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 178], 00:20:54.051 | 99.99th=[ 190] 00:20:54.051 bw ( KiB/s): min=113152, max=303104, per=8.09%, avg=173501.20, stdev=44569.46, samples=20 00:20:54.051 iops : min= 442, max= 1184, avg=677.70, stdev=174.12, samples=20 00:20:54.051 lat (msec) : 10=0.16%, 20=0.79%, 50=7.43%, 100=50.27%, 250=41.35% 00:20:54.051 cpu : usr=0.36%, sys=2.67%, ctx=1504, majf=0, minf=4097 00:20:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.051 issued rwts: total=6841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.051 job2: (groupid=0, jobs=1): err= 0: pid=662402: Wed Jul 24 17:46:13 2024 00:20:54.051 read: IOPS=809, BW=202MiB/s (212MB/s)(2038MiB/10073msec) 00:20:54.051 slat (usec): min=10, max=131342, avg=986.50, stdev=3696.76 00:20:54.051 clat (usec): min=1485, max=232167, avg=78029.93, stdev=37775.05 00:20:54.051 lat (usec): min=1533, max=232221, avg=79016.43, stdev=38235.26 00:20:54.051 clat percentiles (msec): 00:20:54.051 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 42], 00:20:54.051 | 30.00th=[ 51], 40.00th=[ 63], 50.00th=[ 77], 60.00th=[ 87], 00:20:54.051 | 70.00th=[ 99], 80.00th=[ 113], 90.00th=[ 128], 95.00th=[ 144], 00:20:54.051 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 203], 99.95th=[ 207], 00:20:54.051 | 99.99th=[ 232] 00:20:54.051 bw ( KiB/s): min=112128, max=440320, per=9.66%, avg=207001.60, stdev=86355.56, samples=20 00:20:54.051 iops : min= 438, max= 1720, avg=808.60, stdev=337.33, samples=20 00:20:54.051 lat (msec) : 2=0.02%, 4=0.12%, 10=0.69%, 20=1.68%, 50=27.04% 00:20:54.051 lat (msec) : 100=41.69%, 250=28.75% 00:20:54.051 cpu : usr=0.40%, sys=3.16%, ctx=1920, majf=0, minf=4097 00:20:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.051 issued rwts: total=8150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.051 job3: (groupid=0, jobs=1): err= 0: pid=662405: Wed Jul 24 17:46:13 2024 00:20:54.051 read: 
IOPS=724, BW=181MiB/s (190MB/s)(1828MiB/10096msec) 00:20:54.051 slat (usec): min=9, max=217905, avg=1217.10, stdev=5264.04 00:20:54.051 clat (msec): min=17, max=380, avg=87.03, stdev=46.57 00:20:54.051 lat (msec): min=17, max=380, avg=88.25, stdev=47.19 00:20:54.051 clat percentiles (msec): 00:20:54.051 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 43], 00:20:54.051 | 30.00th=[ 51], 40.00th=[ 66], 50.00th=[ 86], 60.00th=[ 96], 00:20:54.051 | 70.00th=[ 109], 80.00th=[ 123], 90.00th=[ 138], 95.00th=[ 163], 00:20:54.051 | 99.00th=[ 262], 99.50th=[ 292], 99.90th=[ 330], 99.95th=[ 330], 00:20:54.051 | 99.99th=[ 380] 00:20:54.051 bw ( KiB/s): min=42496, max=349696, per=8.65%, avg=185522.05, stdev=89011.46, samples=20 00:20:54.051 iops : min= 166, max= 1366, avg=724.60, stdev=347.76, samples=20 00:20:54.051 lat (msec) : 20=0.19%, 50=28.68%, 100=34.80%, 250=35.06%, 500=1.27% 00:20:54.051 cpu : usr=0.33%, sys=2.69%, ctx=1631, majf=0, minf=4097 00:20:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.051 issued rwts: total=7311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.051 job4: (groupid=0, jobs=1): err= 0: pid=662410: Wed Jul 24 17:46:13 2024 00:20:54.051 read: IOPS=707, BW=177MiB/s (185MB/s)(1778MiB/10051msec) 00:20:54.051 slat (usec): min=8, max=74196, avg=934.28, stdev=4088.47 00:20:54.051 clat (msec): min=6, max=194, avg=89.42, stdev=37.31 00:20:54.051 lat (msec): min=6, max=225, avg=90.35, stdev=37.66 00:20:54.051 clat percentiles (msec): 00:20:54.051 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 53], 00:20:54.051 | 30.00th=[ 68], 40.00th=[ 82], 50.00th=[ 92], 60.00th=[ 103], 00:20:54.051 | 70.00th=[ 113], 80.00th=[ 124], 90.00th=[ 136], 95.00th=[ 148], 00:20:54.051 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 186], 99.95th=[ 188], 00:20:54.051 | 99.99th=[ 194] 00:20:54.051 bw ( KiB/s): min=133632, max=287232, per=8.41%, avg=180373.15, stdev=48693.20, samples=20 00:20:54.051 iops : min= 522, max= 1122, avg=704.50, stdev=190.26, samples=20 00:20:54.051 lat (msec) : 10=0.13%, 20=1.35%, 50=16.68%, 100=39.66%, 250=42.18% 00:20:54.051 cpu : usr=0.26%, sys=2.20%, ctx=1915, majf=0, minf=4097 00:20:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.051 issued rwts: total=7110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.051 job5: (groupid=0, jobs=1): err= 0: pid=662430: Wed Jul 24 17:46:13 2024 00:20:54.051 read: IOPS=976, BW=244MiB/s (256MB/s)(2462MiB/10087msec) 00:20:54.051 slat (usec): min=7, max=131282, avg=705.93, stdev=3827.33 00:20:54.051 clat (usec): min=1839, max=239079, avg=64756.65, stdev=39930.22 00:20:54.051 lat (usec): min=1889, max=286730, avg=65462.58, stdev=40373.65 00:20:54.051 clat percentiles (msec): 00:20:54.051 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 28], 20.00th=[ 34], 00:20:54.051 | 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 48], 60.00th=[ 64], 00:20:54.051 | 70.00th=[ 84], 80.00th=[ 102], 90.00th=[ 128], 95.00th=[ 144], 00:20:54.051 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 178], 99.95th=[ 209], 
00:20:54.051 | 99.99th=[ 241] 00:20:54.051 bw ( KiB/s): min=133632, max=475136, per=11.68%, avg=250467.45, stdev=98580.53, samples=20 00:20:54.051 iops : min= 522, max= 1856, avg=978.35, stdev=385.11, samples=20 00:20:54.051 lat (msec) : 2=0.01%, 10=1.33%, 20=4.50%, 50=46.55%, 100=27.06% 00:20:54.051 lat (msec) : 250=20.55% 00:20:54.051 cpu : usr=0.51%, sys=3.20%, ctx=2498, majf=0, minf=4097 00:20:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.051 issued rwts: total=9848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.051 job6: (groupid=0, jobs=1): err= 0: pid=662440: Wed Jul 24 17:46:13 2024 00:20:54.051 read: IOPS=743, BW=186MiB/s (195MB/s)(1868MiB/10053msec) 00:20:54.051 slat (usec): min=10, max=144267, avg=1198.46, stdev=4869.82 00:20:54.051 clat (msec): min=6, max=483, avg=84.80, stdev=53.96 00:20:54.051 lat (msec): min=6, max=483, avg=86.00, stdev=54.62 00:20:54.051 clat percentiles (msec): 00:20:54.051 | 1.00th=[ 20], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 44], 00:20:54.051 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 74], 60.00th=[ 87], 00:20:54.051 | 70.00th=[ 102], 80.00th=[ 122], 90.00th=[ 140], 95.00th=[ 169], 00:20:54.051 | 99.00th=[ 284], 99.50th=[ 405], 99.90th=[ 477], 99.95th=[ 481], 00:20:54.051 | 99.99th=[ 485] 00:20:54.051 bw ( KiB/s): min=49152, max=348160, per=8.84%, avg=189614.45, stdev=90089.41, samples=20 00:20:54.051 iops : min= 192, max= 1360, avg=740.60, stdev=351.95, samples=20 00:20:54.051 lat (msec) : 10=0.15%, 20=1.11%, 50=28.86%, 100=39.45%, 250=28.83% 00:20:54.051 lat (msec) : 500=1.61% 00:20:54.051 cpu : usr=0.32%, sys=2.58%, ctx=1660, majf=0, minf=4097 00:20:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.051 issued rwts: total=7471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.051 job7: (groupid=0, jobs=1): err= 0: pid=662448: Wed Jul 24 17:46:13 2024 00:20:54.051 read: IOPS=698, BW=175MiB/s (183MB/s)(1759MiB/10070msec) 00:20:54.051 slat (usec): min=10, max=101029, avg=1323.52, stdev=4332.55 00:20:54.051 clat (msec): min=8, max=252, avg=90.19, stdev=29.78 00:20:54.051 lat (msec): min=8, max=252, avg=91.51, stdev=30.22 00:20:54.051 clat percentiles (msec): 00:20:54.051 | 1.00th=[ 21], 5.00th=[ 40], 10.00th=[ 51], 20.00th=[ 64], 00:20:54.051 | 30.00th=[ 77], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 100], 00:20:54.051 | 70.00th=[ 108], 80.00th=[ 116], 90.00th=[ 128], 95.00th=[ 136], 00:20:54.051 | 99.00th=[ 155], 99.50th=[ 171], 99.90th=[ 176], 99.95th=[ 215], 00:20:54.051 | 99.99th=[ 253] 00:20:54.051 bw ( KiB/s): min=130048, max=261632, per=8.32%, avg=178426.75, stdev=32420.99, samples=20 00:20:54.051 iops : min= 508, max= 1022, avg=696.90, stdev=126.71, samples=20 00:20:54.051 lat (msec) : 10=0.09%, 20=0.91%, 50=8.79%, 100=51.32%, 250=38.88% 00:20:54.051 lat (msec) : 500=0.01% 00:20:54.051 cpu : usr=0.29%, sys=2.57%, ctx=1568, majf=0, minf=3347 00:20:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.051 issued rwts: total=7034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.051 job8: (groupid=0, jobs=1): err= 0: pid=662473: Wed Jul 24 17:46:13 2024 00:20:54.051 read: IOPS=774, BW=194MiB/s (203MB/s)(1951MiB/10078msec) 00:20:54.051 slat (usec): min=7, max=86650, avg=845.09, stdev=3584.39 00:20:54.051 clat (msec): min=2, max=184, avg=81.71, stdev=36.11 00:20:54.051 lat (msec): min=2, max=243, avg=82.55, stdev=36.57 00:20:54.051 clat percentiles (msec): 00:20:54.051 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 47], 00:20:54.051 | 30.00th=[ 58], 40.00th=[ 71], 50.00th=[ 82], 60.00th=[ 91], 00:20:54.051 | 70.00th=[ 106], 80.00th=[ 116], 90.00th=[ 129], 95.00th=[ 144], 00:20:54.051 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 178], 99.95th=[ 184], 00:20:54.051 | 99.99th=[ 184] 00:20:54.051 bw ( KiB/s): min=130560, max=311808, per=9.24%, avg=198145.95, stdev=52483.74, samples=20 00:20:54.051 iops : min= 510, max= 1218, avg=773.95, stdev=205.06, samples=20 00:20:54.051 lat (msec) : 4=0.04%, 10=0.36%, 20=2.04%, 50=20.50%, 100=42.95% 00:20:54.051 lat (msec) : 250=34.12% 00:20:54.051 cpu : usr=0.22%, sys=2.60%, ctx=2228, majf=0, minf=4097 00:20:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.051 issued rwts: total=7803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.051 job9: (groupid=0, jobs=1): err= 0: pid=662485: Wed Jul 24 17:46:13 2024 00:20:54.051 read: IOPS=915, BW=229MiB/s (240MB/s)(2307MiB/10078msec) 00:20:54.051 slat (usec): min=8, max=81906, avg=912.79, stdev=3012.66 00:20:54.051 clat (msec): min=12, max=189, avg=68.92, stdev=30.54 00:20:54.051 lat (msec): min=12, max=189, avg=69.83, stdev=30.89 00:20:54.051 clat percentiles (msec): 00:20:54.051 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 40], 00:20:54.051 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 67], 60.00th=[ 73], 00:20:54.051 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 112], 95.00th=[ 131], 00:20:54.051 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 169], 99.95th=[ 176], 00:20:54.051 | 99.99th=[ 190] 00:20:54.051 bw ( KiB/s): min=128000, max=449536, per=10.95%, avg=234650.70, stdev=80820.38, samples=20 00:20:54.051 iops : min= 500, max= 1756, avg=916.60, stdev=315.70, samples=20 00:20:54.052 lat (msec) : 20=0.80%, 50=30.98%, 100=53.58%, 250=14.64% 00:20:54.052 cpu : usr=0.41%, sys=3.43%, ctx=2163, majf=0, minf=4097 00:20:54.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.052 issued rwts: total=9229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.052 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.052 job10: (groupid=0, jobs=1): err= 0: pid=662495: Wed Jul 24 17:46:13 2024 00:20:54.052 read: IOPS=736, BW=184MiB/s (193MB/s)(1851MiB/10052msec) 00:20:54.052 slat (usec): min=9, max=73252, avg=1083.24, stdev=3982.45 00:20:54.052 clat (msec): min=7, max=232, avg=85.69, stdev=35.81 00:20:54.052 lat (msec): min=7, max=232, avg=86.78, stdev=36.32 
00:20:54.052 clat percentiles (msec): 00:20:54.052 | 1.00th=[ 15], 5.00th=[ 28], 10.00th=[ 39], 20.00th=[ 54], 00:20:54.052 | 30.00th=[ 64], 40.00th=[ 74], 50.00th=[ 86], 60.00th=[ 95], 00:20:54.052 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 133], 95.00th=[ 146], 00:20:54.052 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 188], 00:20:54.052 | 99.99th=[ 234] 00:20:54.052 bw ( KiB/s): min=111616, max=343040, per=8.77%, avg=187916.65, stdev=62004.71, samples=20 00:20:54.052 iops : min= 436, max= 1340, avg=734.00, stdev=242.26, samples=20 00:20:54.052 lat (msec) : 10=0.07%, 20=2.47%, 50=15.17%, 100=46.46%, 250=35.83% 00:20:54.052 cpu : usr=0.32%, sys=2.79%, ctx=1851, majf=0, minf=4097 00:20:54.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:54.052 issued rwts: total=7404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.052 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:54.052 00:20:54.052 Run status group 0 (all jobs): 00:20:54.052 READ: bw=2094MiB/s (2195MB/s), 157MiB/s-244MiB/s (165MB/s-256MB/s), io=20.6GiB (22.2GB), run=10051-10096msec 00:20:54.052 00:20:54.052 Disk stats (read/write): 00:20:54.052 nvme0n1: ios=12469/0, merge=0/0, ticks=1228120/0, in_queue=1228120, util=97.19% 00:20:54.052 nvme10n1: ios=13509/0, merge=0/0, ticks=1227651/0, in_queue=1227651, util=97.38% 00:20:54.052 nvme1n1: ios=16096/0, merge=0/0, ticks=1226382/0, in_queue=1226382, util=97.63% 00:20:54.052 nvme2n1: ios=14402/0, merge=0/0, ticks=1218878/0, in_queue=1218878, util=97.82% 00:20:54.052 nvme3n1: ios=13951/0, merge=0/0, ticks=1237553/0, in_queue=1237553, util=97.88% 00:20:54.052 nvme4n1: ios=19483/0, merge=0/0, ticks=1235867/0, in_queue=1235867, util=98.21% 00:20:54.052 nvme5n1: ios=14697/0, merge=0/0, ticks=1230260/0, in_queue=1230260, util=98.34% 00:20:54.052 nvme6n1: ios=13848/0, merge=0/0, ticks=1223168/0, in_queue=1223168, util=98.51% 00:20:54.052 nvme7n1: ios=15391/0, merge=0/0, ticks=1234527/0, in_queue=1234527, util=98.89% 00:20:54.052 nvme8n1: ios=18264/0, merge=0/0, ticks=1229874/0, in_queue=1229874, util=99.05% 00:20:54.052 nvme9n1: ios=14626/0, merge=0/0, ticks=1231632/0, in_queue=1231632, util=99.19% 00:20:54.052 17:46:13 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:20:54.052 [global] 00:20:54.052 thread=1 00:20:54.052 invalidate=1 00:20:54.052 rw=randwrite 00:20:54.052 time_based=1 00:20:54.052 runtime=10 00:20:54.052 ioengine=libaio 00:20:54.052 direct=1 00:20:54.052 bs=262144 00:20:54.052 iodepth=64 00:20:54.052 norandommap=1 00:20:54.052 numjobs=1 00:20:54.052 00:20:54.052 [job0] 00:20:54.052 filename=/dev/nvme0n1 00:20:54.052 [job1] 00:20:54.052 filename=/dev/nvme10n1 00:20:54.052 [job2] 00:20:54.052 filename=/dev/nvme1n1 00:20:54.052 [job3] 00:20:54.052 filename=/dev/nvme2n1 00:20:54.052 [job4] 00:20:54.052 filename=/dev/nvme3n1 00:20:54.052 [job5] 00:20:54.052 filename=/dev/nvme4n1 00:20:54.052 [job6] 00:20:54.052 filename=/dev/nvme5n1 00:20:54.052 [job7] 00:20:54.052 filename=/dev/nvme6n1 00:20:54.052 [job8] 00:20:54.052 filename=/dev/nvme7n1 00:20:54.052 [job9] 00:20:54.052 filename=/dev/nvme8n1 00:20:54.052 [job10] 00:20:54.052 filename=/dev/nvme9n1 00:20:54.052 Could not set queue depth (nvme0n1) 00:20:54.052 Could not set queue depth (nvme10n1) 
00:20:54.052 Could not set queue depth (nvme1n1) 00:20:54.052 Could not set queue depth (nvme2n1) 00:20:54.052 Could not set queue depth (nvme3n1) 00:20:54.052 Could not set queue depth (nvme4n1) 00:20:54.052 Could not set queue depth (nvme5n1) 00:20:54.052 Could not set queue depth (nvme6n1) 00:20:54.052 Could not set queue depth (nvme7n1) 00:20:54.052 Could not set queue depth (nvme8n1) 00:20:54.052 Could not set queue depth (nvme9n1) 00:20:54.052 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:54.052 fio-3.35 00:20:54.052 Starting 11 threads 00:21:04.040 00:21:04.040 job0: (groupid=0, jobs=1): err= 0: pid=664171: Wed Jul 24 17:46:24 2024 00:21:04.040 write: IOPS=410, BW=103MiB/s (108MB/s)(1042MiB/10153msec); 0 zone resets 00:21:04.040 slat (usec): min=28, max=850771, avg=1970.69, stdev=15645.14 00:21:04.040 clat (msec): min=16, max=1758, avg=153.85, stdev=181.51 00:21:04.040 lat (msec): min=16, max=1761, avg=155.82, stdev=182.81 00:21:04.040 clat percentiles (msec): 00:21:04.040 | 1.00th=[ 29], 5.00th=[ 55], 10.00th=[ 66], 20.00th=[ 84], 00:21:04.040 | 30.00th=[ 109], 40.00th=[ 124], 50.00th=[ 133], 60.00th=[ 146], 00:21:04.040 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 186], 95.00th=[ 226], 00:21:04.040 | 99.00th=[ 1452], 99.50th=[ 1703], 99.90th=[ 1754], 99.95th=[ 1754], 00:21:04.040 | 99.99th=[ 1754] 00:21:04.040 bw ( KiB/s): min= 2048, max=215040, per=8.73%, avg=105088.00, stdev=53164.73, samples=20 00:21:04.040 iops : min= 8, max= 840, avg=410.50, stdev=207.67, samples=20 00:21:04.040 lat (msec) : 20=0.02%, 50=3.93%, 100=21.79%, 250=70.23%, 500=1.97% 00:21:04.040 lat (msec) : 750=0.55%, 2000=1.51% 00:21:04.040 cpu : usr=1.30%, sys=1.25%, ctx=1878, majf=0, minf=1 00:21:04.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:04.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.040 issued rwts: total=0,4168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.040 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.040 job1: (groupid=0, jobs=1): err= 0: pid=664172: Wed Jul 24 17:46:24 2024 00:21:04.040 
write: IOPS=449, BW=112MiB/s (118MB/s)(1142MiB/10162msec); 0 zone resets 00:21:04.040 slat (usec): min=21, max=88377, avg=1581.40, stdev=4799.94 00:21:04.040 clat (msec): min=4, max=491, avg=140.41, stdev=60.57 00:21:04.040 lat (msec): min=4, max=491, avg=141.99, stdev=61.28 00:21:04.040 clat percentiles (msec): 00:21:04.040 | 1.00th=[ 22], 5.00th=[ 46], 10.00th=[ 69], 20.00th=[ 93], 00:21:04.040 | 30.00th=[ 112], 40.00th=[ 128], 50.00th=[ 136], 60.00th=[ 146], 00:21:04.040 | 70.00th=[ 163], 80.00th=[ 182], 90.00th=[ 213], 95.00th=[ 236], 00:21:04.040 | 99.00th=[ 347], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 439], 00:21:04.040 | 99.99th=[ 493] 00:21:04.040 bw ( KiB/s): min=52736, max=172032, per=9.57%, avg=115251.20, stdev=30343.98, samples=20 00:21:04.040 iops : min= 206, max= 672, avg=450.20, stdev=118.53, samples=20 00:21:04.040 lat (msec) : 10=0.07%, 20=0.68%, 50=5.54%, 100=19.10%, 250=70.48% 00:21:04.040 lat (msec) : 500=4.14% 00:21:04.040 cpu : usr=1.18%, sys=1.18%, ctx=2391, majf=0, minf=1 00:21:04.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:04.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.041 issued rwts: total=0,4566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.041 job2: (groupid=0, jobs=1): err= 0: pid=664184: Wed Jul 24 17:46:24 2024 00:21:04.041 write: IOPS=380, BW=95.2MiB/s (99.8MB/s)(982MiB/10310msec); 0 zone resets 00:21:04.041 slat (usec): min=18, max=537913, avg=2031.78, stdev=10976.85 00:21:04.041 clat (msec): min=3, max=860, avg=165.88, stdev=149.80 00:21:04.041 lat (msec): min=3, max=860, avg=167.91, stdev=151.42 00:21:04.041 clat percentiles (msec): 00:21:04.041 | 1.00th=[ 27], 5.00th=[ 46], 10.00th=[ 61], 20.00th=[ 74], 00:21:04.041 | 30.00th=[ 88], 40.00th=[ 99], 50.00th=[ 109], 60.00th=[ 129], 00:21:04.041 | 70.00th=[ 155], 80.00th=[ 197], 90.00th=[ 380], 95.00th=[ 567], 00:21:04.041 | 99.00th=[ 709], 99.50th=[ 751], 99.90th=[ 827], 99.95th=[ 860], 00:21:04.041 | 99.99th=[ 860] 00:21:04.041 bw ( KiB/s): min=28672, max=199680, per=8.21%, avg=98867.20, stdev=54226.45, samples=20 00:21:04.041 iops : min= 112, max= 780, avg=386.20, stdev=211.82, samples=20 00:21:04.041 lat (msec) : 4=0.03%, 10=0.10%, 20=0.25%, 50=6.70%, 100=34.39% 00:21:04.041 lat (msec) : 250=41.52%, 500=9.65%, 750=6.93%, 1000=0.43% 00:21:04.041 cpu : usr=1.05%, sys=1.13%, ctx=1838, majf=0, minf=1 00:21:04.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:04.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.041 issued rwts: total=0,3926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.041 job3: (groupid=0, jobs=1): err= 0: pid=664185: Wed Jul 24 17:46:24 2024 00:21:04.041 write: IOPS=360, BW=90.1MiB/s (94.5MB/s)(930MiB/10312msec); 0 zone resets 00:21:04.041 slat (usec): min=21, max=80583, avg=2546.96, stdev=6323.85 00:21:04.041 clat (msec): min=6, max=863, avg=174.86, stdev=124.42 00:21:04.041 lat (msec): min=6, max=863, avg=177.40, stdev=125.97 00:21:04.041 clat percentiles (msec): 00:21:04.041 | 1.00th=[ 59], 5.00th=[ 67], 10.00th=[ 73], 20.00th=[ 99], 00:21:04.041 | 30.00th=[ 113], 40.00th=[ 126], 50.00th=[ 138], 60.00th=[ 155], 00:21:04.041 | 
70.00th=[ 174], 80.00th=[ 207], 90.00th=[ 309], 95.00th=[ 514], 00:21:04.041 | 99.00th=[ 600], 99.50th=[ 693], 99.90th=[ 827], 99.95th=[ 860], 00:21:04.041 | 99.99th=[ 860] 00:21:04.041 bw ( KiB/s): min=26624, max=223744, per=7.77%, avg=93517.15, stdev=49214.33, samples=20 00:21:04.041 iops : min= 104, max= 874, avg=365.25, stdev=192.20, samples=20 00:21:04.041 lat (msec) : 10=0.03%, 20=0.16%, 50=0.48%, 100=20.60%, 250=63.58% 00:21:04.041 lat (msec) : 500=10.03%, 750=4.73%, 1000=0.38% 00:21:04.041 cpu : usr=0.84%, sys=1.17%, ctx=1147, majf=0, minf=1 00:21:04.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:21:04.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.041 issued rwts: total=0,3718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.041 job4: (groupid=0, jobs=1): err= 0: pid=664186: Wed Jul 24 17:46:24 2024 00:21:04.041 write: IOPS=411, BW=103MiB/s (108MB/s)(1048MiB/10197msec); 0 zone resets 00:21:04.041 slat (usec): min=20, max=100329, avg=1967.44, stdev=5125.87 00:21:04.041 clat (msec): min=2, max=517, avg=153.60, stdev=71.27 00:21:04.041 lat (msec): min=2, max=517, avg=155.57, stdev=72.16 00:21:04.041 clat percentiles (msec): 00:21:04.041 | 1.00th=[ 17], 5.00th=[ 47], 10.00th=[ 78], 20.00th=[ 97], 00:21:04.041 | 30.00th=[ 115], 40.00th=[ 140], 50.00th=[ 157], 60.00th=[ 169], 00:21:04.041 | 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 215], 95.00th=[ 271], 00:21:04.041 | 99.00th=[ 409], 99.50th=[ 422], 99.90th=[ 498], 99.95th=[ 498], 00:21:04.041 | 99.99th=[ 518] 00:21:04.041 bw ( KiB/s): min=45056, max=165888, per=8.78%, avg=105713.45, stdev=28548.39, samples=20 00:21:04.041 iops : min= 176, max= 648, avg=412.90, stdev=111.51, samples=20 00:21:04.041 lat (msec) : 4=0.10%, 10=0.64%, 20=0.81%, 50=4.05%, 100=17.98% 00:21:04.041 lat (msec) : 250=70.59%, 500=5.77%, 750=0.05% 00:21:04.041 cpu : usr=1.16%, sys=1.34%, ctx=1875, majf=0, minf=1 00:21:04.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:04.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.041 issued rwts: total=0,4193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.041 job5: (groupid=0, jobs=1): err= 0: pid=664187: Wed Jul 24 17:46:24 2024 00:21:04.041 write: IOPS=438, BW=110MiB/s (115MB/s)(1103MiB/10060msec); 0 zone resets 00:21:04.041 slat (usec): min=20, max=99784, avg=1757.24, stdev=5708.28 00:21:04.041 clat (msec): min=5, max=462, avg=143.49, stdev=78.87 00:21:04.041 lat (msec): min=5, max=462, avg=145.25, stdev=79.95 00:21:04.041 clat percentiles (msec): 00:21:04.041 | 1.00th=[ 13], 5.00th=[ 37], 10.00th=[ 66], 20.00th=[ 84], 00:21:04.041 | 30.00th=[ 97], 40.00th=[ 109], 50.00th=[ 130], 60.00th=[ 146], 00:21:04.041 | 70.00th=[ 167], 80.00th=[ 201], 90.00th=[ 247], 95.00th=[ 292], 00:21:04.041 | 99.00th=[ 405], 99.50th=[ 447], 99.90th=[ 464], 99.95th=[ 464], 00:21:04.041 | 99.99th=[ 464] 00:21:04.041 bw ( KiB/s): min=51815, max=206336, per=9.24%, avg=111313.95, stdev=42641.69, samples=20 00:21:04.041 iops : min= 202, max= 806, avg=434.80, stdev=166.60, samples=20 00:21:04.041 lat (msec) : 10=0.50%, 20=2.34%, 50=3.02%, 100=26.25%, 250=58.60% 00:21:04.041 lat (msec) : 500=9.29% 00:21:04.041 
cpu : usr=1.11%, sys=1.19%, ctx=2144, majf=0, minf=1 00:21:04.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:04.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.041 issued rwts: total=0,4411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.041 job6: (groupid=0, jobs=1): err= 0: pid=664188: Wed Jul 24 17:46:24 2024 00:21:04.041 write: IOPS=511, BW=128MiB/s (134MB/s)(1302MiB/10172msec); 0 zone resets 00:21:04.041 slat (usec): min=24, max=88082, avg=1666.14, stdev=4277.59 00:21:04.041 clat (msec): min=14, max=325, avg=123.33, stdev=50.83 00:21:04.041 lat (msec): min=16, max=325, avg=125.00, stdev=51.36 00:21:04.041 clat percentiles (msec): 00:21:04.041 | 1.00th=[ 39], 5.00th=[ 62], 10.00th=[ 67], 20.00th=[ 80], 00:21:04.041 | 30.00th=[ 90], 40.00th=[ 100], 50.00th=[ 115], 60.00th=[ 129], 00:21:04.041 | 70.00th=[ 150], 80.00th=[ 167], 90.00th=[ 188], 95.00th=[ 220], 00:21:04.041 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 321], 99.95th=[ 321], 00:21:04.041 | 99.99th=[ 326] 00:21:04.041 bw ( KiB/s): min=75776, max=189440, per=10.93%, avg=131660.80, stdev=39782.41, samples=20 00:21:04.041 iops : min= 296, max= 740, avg=514.30, stdev=155.40, samples=20 00:21:04.041 lat (msec) : 20=0.12%, 50=2.92%, 100=37.88%, 250=56.88%, 500=2.21% 00:21:04.041 cpu : usr=1.41%, sys=1.42%, ctx=1988, majf=0, minf=1 00:21:04.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:04.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.041 issued rwts: total=0,5206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.041 job7: (groupid=0, jobs=1): err= 0: pid=664191: Wed Jul 24 17:46:24 2024 00:21:04.041 write: IOPS=452, BW=113MiB/s (119MB/s)(1148MiB/10152msec); 0 zone resets 00:21:04.041 slat (usec): min=20, max=167043, avg=1706.14, stdev=5759.05 00:21:04.041 clat (msec): min=5, max=565, avg=139.21, stdev=81.70 00:21:04.041 lat (msec): min=8, max=592, avg=140.92, stdev=82.63 00:21:04.041 clat percentiles (msec): 00:21:04.041 | 1.00th=[ 22], 5.00th=[ 49], 10.00th=[ 64], 20.00th=[ 77], 00:21:04.041 | 30.00th=[ 91], 40.00th=[ 109], 50.00th=[ 122], 60.00th=[ 140], 00:21:04.041 | 70.00th=[ 161], 80.00th=[ 190], 90.00th=[ 234], 95.00th=[ 275], 00:21:04.041 | 99.00th=[ 506], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:21:04.041 | 99.99th=[ 567] 00:21:04.041 bw ( KiB/s): min=61952, max=166912, per=9.63%, avg=115951.05, stdev=37336.34, samples=20 00:21:04.041 iops : min= 242, max= 652, avg=452.90, stdev=145.87, samples=20 00:21:04.041 lat (msec) : 10=0.20%, 20=0.63%, 50=4.46%, 100=30.27%, 250=57.58% 00:21:04.041 lat (msec) : 500=5.81%, 750=1.05% 00:21:04.041 cpu : usr=1.35%, sys=1.19%, ctx=2118, majf=0, minf=1 00:21:04.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:04.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.041 issued rwts: total=0,4592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.041 job8: (groupid=0, jobs=1): err= 0: pid=664192: Wed Jul 24 17:46:24 2024 
00:21:04.041 write: IOPS=530, BW=133MiB/s (139MB/s)(1357MiB/10237msec); 0 zone resets 00:21:04.041 slat (usec): min=18, max=145914, avg=1491.36, stdev=5022.96 00:21:04.041 clat (msec): min=5, max=586, avg=119.12, stdev=72.75 00:21:04.041 lat (msec): min=5, max=586, avg=120.61, stdev=73.51 00:21:04.041 clat percentiles (msec): 00:21:04.041 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 54], 20.00th=[ 59], 00:21:04.041 | 30.00th=[ 64], 40.00th=[ 85], 50.00th=[ 103], 60.00th=[ 133], 00:21:04.041 | 70.00th=[ 153], 80.00th=[ 171], 90.00th=[ 201], 95.00th=[ 251], 00:21:04.041 | 99.00th=[ 342], 99.50th=[ 447], 99.90th=[ 567], 99.95th=[ 567], 00:21:04.041 | 99.99th=[ 584] 00:21:04.041 bw ( KiB/s): min=47104, max=268288, per=11.41%, avg=137354.25, stdev=61118.17, samples=20 00:21:04.041 iops : min= 184, max= 1048, avg=536.50, stdev=238.77, samples=20 00:21:04.041 lat (msec) : 10=0.35%, 20=1.31%, 50=7.20%, 100=39.95%, 250=46.14% 00:21:04.041 lat (msec) : 500=4.72%, 750=0.33% 00:21:04.041 cpu : usr=1.79%, sys=1.57%, ctx=2414, majf=0, minf=1 00:21:04.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:04.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.041 issued rwts: total=0,5429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.042 job9: (groupid=0, jobs=1): err= 0: pid=664193: Wed Jul 24 17:46:24 2024 00:21:04.042 write: IOPS=296, BW=74.2MiB/s (77.8MB/s)(765MiB/10307msec); 0 zone resets 00:21:04.042 slat (usec): min=24, max=229648, avg=2657.85, stdev=9894.73 00:21:04.042 clat (msec): min=12, max=882, avg=212.01, stdev=147.20 00:21:04.042 lat (msec): min=12, max=882, avg=214.67, stdev=149.42 00:21:04.042 clat percentiles (msec): 00:21:04.042 | 1.00th=[ 19], 5.00th=[ 49], 10.00th=[ 66], 20.00th=[ 101], 00:21:04.042 | 30.00th=[ 140], 40.00th=[ 161], 50.00th=[ 178], 60.00th=[ 194], 00:21:04.042 | 70.00th=[ 224], 80.00th=[ 271], 90.00th=[ 447], 95.00th=[ 567], 00:21:04.042 | 99.00th=[ 609], 99.50th=[ 726], 99.90th=[ 844], 99.95th=[ 885], 00:21:04.042 | 99.99th=[ 885] 00:21:04.042 bw ( KiB/s): min=26624, max=163840, per=6.37%, avg=76697.60, stdev=39898.19, samples=20 00:21:04.042 iops : min= 104, max= 640, avg=299.60, stdev=155.85, samples=20 00:21:04.042 lat (msec) : 20=1.14%, 50=4.48%, 100=14.28%, 250=57.09%, 500=14.35% 00:21:04.042 lat (msec) : 750=8.20%, 1000=0.46% 00:21:04.042 cpu : usr=0.75%, sys=0.92%, ctx=1627, majf=0, minf=1 00:21:04.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:21:04.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.042 issued rwts: total=0,3060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.042 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.042 job10: (groupid=0, jobs=1): err= 0: pid=664195: Wed Jul 24 17:46:24 2024 00:21:04.042 write: IOPS=515, BW=129MiB/s (135MB/s)(1309MiB/10154msec); 0 zone resets 00:21:04.042 slat (usec): min=24, max=107484, avg=1586.87, stdev=4270.03 00:21:04.042 clat (msec): min=11, max=419, avg=121.96, stdev=51.32 00:21:04.042 lat (msec): min=11, max=419, avg=123.55, stdev=51.82 00:21:04.042 clat percentiles (msec): 00:21:04.042 | 1.00th=[ 27], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 79], 00:21:04.042 | 30.00th=[ 91], 40.00th=[ 110], 50.00th=[ 121], 60.00th=[ 130], 
00:21:04.042 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 174], 95.00th=[ 201], 00:21:04.042 | 99.00th=[ 313], 99.50th=[ 397], 99.90th=[ 414], 99.95th=[ 418], 00:21:04.042 | 99.99th=[ 422] 00:21:04.042 bw ( KiB/s): min=62976, max=205824, per=10.99%, avg=132386.85, stdev=38997.54, samples=20 00:21:04.042 iops : min= 246, max= 804, avg=517.10, stdev=152.37, samples=20 00:21:04.042 lat (msec) : 20=0.27%, 50=2.94%, 100=31.27%, 250=63.34%, 500=2.18% 00:21:04.042 cpu : usr=1.42%, sys=1.52%, ctx=2178, majf=0, minf=1 00:21:04.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:04.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:04.042 issued rwts: total=0,5235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.042 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:04.042 00:21:04.042 Run status group 0 (all jobs): 00:21:04.042 WRITE: bw=1176MiB/s (1233MB/s), 74.2MiB/s-133MiB/s (77.8MB/s-139MB/s), io=11.8GiB (12.7GB), run=10060-10312msec 00:21:04.042 00:21:04.042 Disk stats (read/write): 00:21:04.042 nvme0n1: ios=49/8297, merge=0/0, ticks=54/1235618, in_queue=1235672, util=97.00% 00:21:04.042 nvme10n1: ios=48/8948, merge=0/0, ticks=532/1213407, in_queue=1213939, util=99.89% 00:21:04.042 nvme1n1: ios=44/7768, merge=0/0, ticks=3825/1137860, in_queue=1141685, util=99.91% 00:21:04.042 nvme2n1: ios=21/7353, merge=0/0, ticks=63/1215644, in_queue=1215707, util=97.95% 00:21:04.042 nvme3n1: ios=27/8354, merge=0/0, ticks=83/1239880, in_queue=1239963, util=98.08% 00:21:04.042 nvme4n1: ios=46/8562, merge=0/0, ticks=3678/1205975, in_queue=1209653, util=99.95% 00:21:04.042 nvme5n1: ios=0/10411, merge=0/0, ticks=0/1242127, in_queue=1242127, util=98.30% 00:21:04.042 nvme6n1: ios=46/9003, merge=0/0, ticks=1032/1206506, in_queue=1207538, util=100.00% 00:21:04.042 nvme7n1: ios=0/10806, merge=0/0, ticks=0/1234475, in_queue=1234475, util=98.80% 00:21:04.042 nvme8n1: ios=44/6037, merge=0/0, ticks=3477/1200056, in_queue=1203533, util=100.00% 00:21:04.042 nvme9n1: ios=50/10290, merge=0/0, ticks=989/1203651, in_queue=1204640, util=100.00% 00:21:04.042 17:46:24 -- target/multiconnection.sh@36 -- # sync 00:21:04.042 17:46:24 -- target/multiconnection.sh@37 -- # seq 1 11 00:21:04.042 17:46:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:04.042 17:46:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:04.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:04.042 17:46:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:04.042 17:46:25 -- common/autotest_common.sh@1198 -- # local i=0 00:21:04.042 17:46:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:04.042 17:46:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:21:04.042 17:46:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:04.042 17:46:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:21:04.042 17:46:25 -- common/autotest_common.sh@1210 -- # return 0 00:21:04.042 17:46:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.042 17:46:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.042 17:46:25 -- common/autotest_common.sh@10 -- # set +x 00:21:04.042 17:46:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.042 17:46:25 -- target/multiconnection.sh@37 -- # for i in $(seq 
1 $NVMF_SUBSYS) 00:21:04.042 17:46:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:04.042 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:04.042 17:46:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:04.042 17:46:25 -- common/autotest_common.sh@1198 -- # local i=0 00:21:04.042 17:46:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:04.042 17:46:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:21:04.042 17:46:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:04.042 17:46:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:21:04.042 17:46:25 -- common/autotest_common.sh@1210 -- # return 0 00:21:04.042 17:46:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:04.042 17:46:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.042 17:46:25 -- common/autotest_common.sh@10 -- # set +x 00:21:04.042 17:46:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.042 17:46:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:04.042 17:46:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:04.302 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:04.562 17:46:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:04.562 17:46:25 -- common/autotest_common.sh@1198 -- # local i=0 00:21:04.562 17:46:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:04.562 17:46:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:21:04.562 17:46:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:04.562 17:46:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:21:04.562 17:46:25 -- common/autotest_common.sh@1210 -- # return 0 00:21:04.562 17:46:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:04.562 17:46:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.562 17:46:25 -- common/autotest_common.sh@10 -- # set +x 00:21:04.562 17:46:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.562 17:46:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:04.562 17:46:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:04.822 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:04.822 17:46:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:04.822 17:46:26 -- common/autotest_common.sh@1198 -- # local i=0 00:21:04.822 17:46:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:04.822 17:46:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:21:04.822 17:46:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:04.822 17:46:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:21:04.822 17:46:26 -- common/autotest_common.sh@1210 -- # return 0 00:21:04.822 17:46:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:04.822 17:46:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.822 17:46:26 -- common/autotest_common.sh@10 -- # set +x 00:21:04.822 17:46:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.822 17:46:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:04.822 17:46:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode5 00:21:04.822 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:04.822 17:46:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:04.822 17:46:26 -- common/autotest_common.sh@1198 -- # local i=0 00:21:04.822 17:46:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:04.822 17:46:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:21:04.822 17:46:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:04.822 17:46:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:21:04.822 17:46:26 -- common/autotest_common.sh@1210 -- # return 0 00:21:04.822 17:46:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:04.822 17:46:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:04.822 17:46:26 -- common/autotest_common.sh@10 -- # set +x 00:21:04.822 17:46:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:04.822 17:46:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:04.822 17:46:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:05.082 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:05.082 17:46:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:05.082 17:46:26 -- common/autotest_common.sh@1198 -- # local i=0 00:21:05.082 17:46:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:05.082 17:46:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:21:05.082 17:46:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:05.082 17:46:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:21:05.082 17:46:26 -- common/autotest_common.sh@1210 -- # return 0 00:21:05.082 17:46:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:05.082 17:46:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.082 17:46:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.082 17:46:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.082 17:46:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.082 17:46:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:05.341 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:05.341 17:46:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:05.341 17:46:26 -- common/autotest_common.sh@1198 -- # local i=0 00:21:05.341 17:46:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:05.341 17:46:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:21:05.601 17:46:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:05.601 17:46:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:21:05.601 17:46:26 -- common/autotest_common.sh@1210 -- # return 0 00:21:05.601 17:46:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:05.601 17:46:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.601 17:46:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.601 17:46:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.601 17:46:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.601 17:46:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:05.601 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 
controller(s) 00:21:05.601 17:46:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:05.601 17:46:27 -- common/autotest_common.sh@1198 -- # local i=0 00:21:05.601 17:46:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:05.601 17:46:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:21:05.601 17:46:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:05.601 17:46:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:21:05.601 17:46:27 -- common/autotest_common.sh@1210 -- # return 0 00:21:05.601 17:46:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:05.601 17:46:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.601 17:46:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.601 17:46:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.601 17:46:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.601 17:46:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:05.861 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:05.861 17:46:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:05.861 17:46:27 -- common/autotest_common.sh@1198 -- # local i=0 00:21:05.861 17:46:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:05.861 17:46:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:21:05.861 17:46:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:05.861 17:46:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:21:05.861 17:46:27 -- common/autotest_common.sh@1210 -- # return 0 00:21:05.861 17:46:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:05.861 17:46:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.861 17:46:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.861 17:46:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.861 17:46:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.861 17:46:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:05.861 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:05.861 17:46:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:05.861 17:46:27 -- common/autotest_common.sh@1198 -- # local i=0 00:21:05.861 17:46:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:05.861 17:46:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:21:05.861 17:46:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:21:05.861 17:46:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:05.861 17:46:27 -- common/autotest_common.sh@1210 -- # return 0 00:21:05.861 17:46:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:05.861 17:46:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.861 17:46:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.861 17:46:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.861 17:46:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.861 17:46:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:05.861 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:05.861 17:46:27 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK11 00:21:05.861 17:46:27 -- common/autotest_common.sh@1198 -- # local i=0 00:21:05.861 17:46:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:05.861 17:46:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:21:05.861 17:46:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:05.861 17:46:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:21:05.861 17:46:27 -- common/autotest_common.sh@1210 -- # return 0 00:21:05.861 17:46:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:05.861 17:46:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.861 17:46:27 -- common/autotest_common.sh@10 -- # set +x 00:21:06.121 17:46:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.121 17:46:27 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:06.121 17:46:27 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:06.121 17:46:27 -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:06.121 17:46:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:06.121 17:46:27 -- nvmf/common.sh@116 -- # sync 00:21:06.121 17:46:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:06.121 17:46:27 -- nvmf/common.sh@119 -- # set +e 00:21:06.121 17:46:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:06.121 17:46:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:06.121 rmmod nvme_tcp 00:21:06.121 rmmod nvme_fabrics 00:21:06.121 rmmod nvme_keyring 00:21:06.121 17:46:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:06.121 17:46:27 -- nvmf/common.sh@123 -- # set -e 00:21:06.121 17:46:27 -- nvmf/common.sh@124 -- # return 0 00:21:06.121 17:46:27 -- nvmf/common.sh@477 -- # '[' -n 655714 ']' 00:21:06.121 17:46:27 -- nvmf/common.sh@478 -- # killprocess 655714 00:21:06.121 17:46:27 -- common/autotest_common.sh@926 -- # '[' -z 655714 ']' 00:21:06.121 17:46:27 -- common/autotest_common.sh@930 -- # kill -0 655714 00:21:06.121 17:46:27 -- common/autotest_common.sh@931 -- # uname 00:21:06.121 17:46:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:06.121 17:46:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 655714 00:21:06.121 17:46:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:06.121 17:46:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:06.121 17:46:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 655714' 00:21:06.121 killing process with pid 655714 00:21:06.121 17:46:27 -- common/autotest_common.sh@945 -- # kill 655714 00:21:06.121 17:46:27 -- common/autotest_common.sh@950 -- # wait 655714 00:21:06.691 17:46:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:06.691 17:46:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:06.691 17:46:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:06.691 17:46:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.691 17:46:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:06.691 17:46:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.691 17:46:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.691 17:46:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.605 17:46:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:08.605 00:21:08.605 real 1m10.536s 00:21:08.605 user 4m14.776s 00:21:08.605 sys 0m20.775s 00:21:08.605 17:46:30 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.605 17:46:30 -- common/autotest_common.sh@10 -- # set +x 00:21:08.605 ************************************ 00:21:08.605 END TEST nvmf_multiconnection 00:21:08.605 ************************************ 00:21:08.605 17:46:30 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:08.605 17:46:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:08.605 17:46:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:08.605 17:46:30 -- common/autotest_common.sh@10 -- # set +x 00:21:08.605 ************************************ 00:21:08.605 START TEST nvmf_initiator_timeout 00:21:08.605 ************************************ 00:21:08.605 17:46:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:08.605 * Looking for test storage... 00:21:08.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:08.605 17:46:30 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.605 17:46:30 -- nvmf/common.sh@7 -- # uname -s 00:21:08.605 17:46:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.605 17:46:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.605 17:46:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.605 17:46:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.605 17:46:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.605 17:46:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.605 17:46:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.605 17:46:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.605 17:46:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.605 17:46:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.865 17:46:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:08.865 17:46:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:08.865 17:46:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.865 17:46:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.865 17:46:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.865 17:46:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.865 17:46:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.865 17:46:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.865 17:46:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.865 17:46:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.865 17:46:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.865 17:46:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.865 17:46:30 -- paths/export.sh@5 -- # export PATH 00:21:08.865 17:46:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.865 17:46:30 -- nvmf/common.sh@46 -- # : 0 00:21:08.865 17:46:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:08.865 17:46:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:08.865 17:46:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:08.865 17:46:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.865 17:46:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.865 17:46:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:08.865 17:46:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:08.865 17:46:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:08.865 17:46:30 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.865 17:46:30 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.865 17:46:30 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:08.865 17:46:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:08.865 17:46:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.865 17:46:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:08.865 17:46:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:08.865 17:46:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:08.865 17:46:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.865 17:46:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.865 17:46:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.865 17:46:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:08.865 17:46:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:08.865 17:46:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:08.865 17:46:30 -- common/autotest_common.sh@10 -- # set +x 00:21:14.144 17:46:35 -- nvmf/common.sh@288 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:21:14.144 17:46:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:14.144 17:46:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:14.144 17:46:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:14.144 17:46:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:14.144 17:46:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:14.144 17:46:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:14.144 17:46:35 -- nvmf/common.sh@294 -- # net_devs=() 00:21:14.144 17:46:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:14.144 17:46:35 -- nvmf/common.sh@295 -- # e810=() 00:21:14.144 17:46:35 -- nvmf/common.sh@295 -- # local -ga e810 00:21:14.144 17:46:35 -- nvmf/common.sh@296 -- # x722=() 00:21:14.144 17:46:35 -- nvmf/common.sh@296 -- # local -ga x722 00:21:14.144 17:46:35 -- nvmf/common.sh@297 -- # mlx=() 00:21:14.144 17:46:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:14.144 17:46:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.144 17:46:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:14.144 17:46:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:14.144 17:46:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:14.144 17:46:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:14.144 17:46:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:14.144 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:14.144 17:46:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:14.144 17:46:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:14.144 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:14.144 17:46:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:14.144 17:46:35 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:14.144 17:46:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:14.144 17:46:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.144 17:46:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:14.144 17:46:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.144 17:46:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:14.144 Found net devices under 0000:86:00.0: cvl_0_0 00:21:14.144 17:46:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.144 17:46:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:14.144 17:46:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.144 17:46:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:14.144 17:46:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.144 17:46:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:14.144 Found net devices under 0000:86:00.1: cvl_0_1 00:21:14.144 17:46:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.144 17:46:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:14.144 17:46:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:14.144 17:46:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:14.144 17:46:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.144 17:46:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.144 17:46:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.144 17:46:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:14.144 17:46:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.144 17:46:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.144 17:46:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:14.144 17:46:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.144 17:46:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.144 17:46:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:14.144 17:46:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:14.144 17:46:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.144 17:46:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.144 17:46:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.144 17:46:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.144 17:46:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:14.144 17:46:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.144 17:46:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.144 17:46:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.144 17:46:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:14.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:14.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:21:14.144 00:21:14.144 --- 10.0.0.2 ping statistics --- 00:21:14.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.144 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:21:14.144 17:46:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:21:14.144 00:21:14.144 --- 10.0.0.1 ping statistics --- 00:21:14.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.144 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:21:14.144 17:46:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.144 17:46:35 -- nvmf/common.sh@410 -- # return 0 00:21:14.144 17:46:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:14.144 17:46:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.144 17:46:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:14.144 17:46:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.144 17:46:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:14.144 17:46:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:14.144 17:46:35 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:14.144 17:46:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:14.144 17:46:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:14.144 17:46:35 -- common/autotest_common.sh@10 -- # set +x 00:21:14.144 17:46:35 -- nvmf/common.sh@469 -- # nvmfpid=669426 00:21:14.144 17:46:35 -- nvmf/common.sh@470 -- # waitforlisten 669426 00:21:14.144 17:46:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:14.144 17:46:35 -- common/autotest_common.sh@819 -- # '[' -z 669426 ']' 00:21:14.144 17:46:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.144 17:46:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:14.144 17:46:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.144 17:46:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:14.144 17:46:35 -- common/autotest_common.sh@10 -- # set +x 00:21:14.404 [2024-07-24 17:46:35.752738] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:14.404 [2024-07-24 17:46:35.752782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.404 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.404 [2024-07-24 17:46:35.809673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.404 [2024-07-24 17:46:35.889326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:14.404 [2024-07-24 17:46:35.889449] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
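The namespace plumbing traced above is what lets one dual-port E810 card act as both NVMe/TCP target and initiator on a single host: one port is moved into cvl_0_0_ns_spdk, the other stays in the root namespace, and the two talk over 10.0.0.0/24. A condensed sketch of that setup, reusing the cvl_0_0/cvl_0_1 names and addresses from the trace (interface names and PCI slots will differ on other machines):

    # Move the target-side port into its own namespace; the initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address each side and bring the links (plus the namespace loopback) up.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on the default port and check reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1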
00:21:14.404 [2024-07-24 17:46:35.889457] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.404 [2024-07-24 17:46:35.889464] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.404 [2024-07-24 17:46:35.889501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.404 [2024-07-24 17:46:35.889600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.404 [2024-07-24 17:46:35.889673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.404 [2024-07-24 17:46:35.889674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.971 17:46:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:14.971 17:46:36 -- common/autotest_common.sh@852 -- # return 0 00:21:14.971 17:46:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:14.971 17:46:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:14.971 17:46:36 -- common/autotest_common.sh@10 -- # set +x 00:21:15.231 17:46:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.231 17:46:36 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:15.231 17:46:36 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:15.231 17:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:15.231 17:46:36 -- common/autotest_common.sh@10 -- # set +x 00:21:15.231 Malloc0 00:21:15.231 17:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.231 17:46:36 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:15.231 17:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:15.231 17:46:36 -- common/autotest_common.sh@10 -- # set +x 00:21:15.231 Delay0 00:21:15.231 17:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.231 17:46:36 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:15.231 17:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:15.231 17:46:36 -- common/autotest_common.sh@10 -- # set +x 00:21:15.231 [2024-07-24 17:46:36.624552] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.231 17:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.231 17:46:36 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:15.231 17:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:15.231 17:46:36 -- common/autotest_common.sh@10 -- # set +x 00:21:15.231 17:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.231 17:46:36 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:15.231 17:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:15.231 17:46:36 -- common/autotest_common.sh@10 -- # set +x 00:21:15.231 17:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.231 17:46:36 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:15.231 17:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:15.231 17:46:36 -- common/autotest_common.sh@10 -- # set +x 00:21:15.231 [2024-07-24 17:46:36.649503] tcp.c: 
951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.231 17:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.231 17:46:36 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:16.610 17:46:37 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:21:16.610 17:46:37 -- common/autotest_common.sh@1177 -- # local i=0 00:21:16.610 17:46:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:16.610 17:46:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:16.610 17:46:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:18.516 17:46:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:18.516 17:46:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:18.516 17:46:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:21:18.516 17:46:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:18.516 17:46:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:18.516 17:46:39 -- common/autotest_common.sh@1187 -- # return 0 00:21:18.516 17:46:39 -- target/initiator_timeout.sh@35 -- # fio_pid=670158 00:21:18.516 17:46:39 -- target/initiator_timeout.sh@37 -- # sleep 3 00:21:18.516 17:46:39 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:21:18.516 [global] 00:21:18.516 thread=1 00:21:18.516 invalidate=1 00:21:18.516 rw=write 00:21:18.516 time_based=1 00:21:18.516 runtime=60 00:21:18.516 ioengine=libaio 00:21:18.516 direct=1 00:21:18.516 bs=4096 00:21:18.516 iodepth=1 00:21:18.516 norandommap=0 00:21:18.516 numjobs=1 00:21:18.516 00:21:18.516 verify_dump=1 00:21:18.516 verify_backlog=512 00:21:18.516 verify_state_save=0 00:21:18.516 do_verify=1 00:21:18.516 verify=crc32c-intel 00:21:18.516 [job0] 00:21:18.516 filename=/dev/nvme0n1 00:21:18.516 Could not set queue depth (nvme0n1) 00:21:18.773 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:18.773 fio-3.35 00:21:18.773 Starting 1 thread 00:21:21.304 17:46:42 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:21:21.304 17:46:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.304 17:46:42 -- common/autotest_common.sh@10 -- # set +x 00:21:21.304 true 00:21:21.304 17:46:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.304 17:46:42 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:21:21.304 17:46:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.304 17:46:42 -- common/autotest_common.sh@10 -- # set +x 00:21:21.304 true 00:21:21.304 17:46:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.304 17:46:42 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:21:21.304 17:46:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.304 17:46:42 -- common/autotest_common.sh@10 -- # set +x 00:21:21.304 true 00:21:21.304 17:46:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.304 17:46:42 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 
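The bdev_delay_update_latency calls traced here drive the scenario under test: while fio writes to /dev/nvme0n1 through the exported Delay0 namespace, the delay bdev's simulated latencies are raised to 31 seconds and later dropped back to 30 microseconds, and the script finally checks that fio still finishes cleanly. A minimal sketch of the same RPC pattern, assuming SPDK's stock scripts/rpc.py on the default socket and an existing delay bdev named Delay0 (the rpc.py path is illustrative):

    rpc=./scripts/rpc.py
    # Latency values are in microseconds: 31000000 is 31 seconds.
    for knob in avg_read avg_write p99_read p99_write; do
        $rpc bdev_delay_update_latency Delay0 "$knob" 31000000
    done
    sleep 3
    # Drop back to near-zero latency so the queued writes can drain.
    for knob in avg_read avg_write p99_read p99_write; do
        $rpc bdev_delay_update_latency Delay0 "$knob" 30
    done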
00:21:21.304 17:46:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.304 17:46:42 -- common/autotest_common.sh@10 -- # set +x 00:21:21.304 true 00:21:21.304 17:46:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.304 17:46:42 -- target/initiator_timeout.sh@45 -- # sleep 3 00:21:24.591 17:46:45 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:21:24.591 17:46:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.591 17:46:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.591 true 00:21:24.591 17:46:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.591 17:46:45 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:21:24.591 17:46:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.591 17:46:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.591 true 00:21:24.591 17:46:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.591 17:46:45 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:21:24.591 17:46:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.591 17:46:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.591 true 00:21:24.591 17:46:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.591 17:46:45 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:21:24.591 17:46:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.591 17:46:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.591 true 00:21:24.591 17:46:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.591 17:46:45 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:21:24.591 17:46:45 -- target/initiator_timeout.sh@54 -- # wait 670158 00:22:20.822 00:22:20.822 job0: (groupid=0, jobs=1): err= 0: pid=670276: Wed Jul 24 17:47:40 2024 00:22:20.822 read: IOPS=69, BW=279KiB/s (286kB/s)(16.3MiB/60024msec) 00:22:20.822 slat (usec): min=5, max=11178, avg=15.23, stdev=204.44 00:22:20.822 clat (usec): min=441, max=41644k, avg=13960.60, stdev=643849.90 00:22:20.822 lat (usec): min=448, max=41644k, avg=13975.83, stdev=643849.88 00:22:20.822 clat percentiles (usec): 00:22:20.822 | 1.00th=[ 478], 5.00th=[ 537], 10.00th=[ 578], 00:22:20.822 | 20.00th=[ 603], 30.00th=[ 611], 40.00th=[ 627], 00:22:20.822 | 50.00th=[ 644], 60.00th=[ 734], 70.00th=[ 914], 00:22:20.822 | 80.00th=[ 1020], 90.00th=[ 1139], 95.00th=[ 42206], 00:22:20.822 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42730], 00:22:20.822 | 99.95th=[ 43254], 99.99th=[17112761] 00:22:20.822 write: IOPS=76, BW=307KiB/s (314kB/s)(18.0MiB/60024msec); 0 zone resets 00:22:20.822 slat (usec): min=9, max=31033, avg=18.84, stdev=457.00 00:22:20.822 clat (usec): min=223, max=3471, avg=310.51, stdev=106.94 00:22:20.822 lat (usec): min=234, max=31760, avg=329.35, stdev=475.47 00:22:20.822 clat percentiles (usec): 00:22:20.822 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 262], 00:22:20.822 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:22:20.822 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 424], 95.00th=[ 469], 00:22:20.822 | 99.00th=[ 693], 99.50th=[ 734], 99.90th=[ 791], 99.95th=[ 824], 00:22:20.822 | 99.99th=[ 3458] 00:22:20.822 bw ( KiB/s): min= 496, max= 4096, per=100.00%, avg=3686.40, stdev=1131.78, samples=10 00:22:20.822 iops : min= 124, max= 1024, avg=921.60, stdev=282.94, samples=10 00:22:20.822 lat (usec) : 250=2.97%, 500=48.62%, 750=29.78%, 
1000=8.19% 00:22:20.822 lat (msec) : 2=6.63%, 4=0.02%, 50=3.78%, >=2000=0.01% 00:22:20.822 cpu : usr=0.14%, sys=0.25%, ctx=8798, majf=0, minf=2 00:22:20.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:20.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.822 issued rwts: total=4184,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:20.822 00:22:20.822 Run status group 0 (all jobs): 00:22:20.822 READ: bw=279KiB/s (286kB/s), 279KiB/s-279KiB/s (286kB/s-286kB/s), io=16.3MiB (17.1MB), run=60024-60024msec 00:22:20.822 WRITE: bw=307KiB/s (314kB/s), 307KiB/s-307KiB/s (314kB/s-314kB/s), io=18.0MiB (18.9MB), run=60024-60024msec 00:22:20.822 00:22:20.822 Disk stats (read/write): 00:22:20.822 nvme0n1: ios=4233/4608, merge=0/0, ticks=17978/1379, in_queue=19357, util=99.86% 00:22:20.822 17:47:40 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:20.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:20.822 17:47:40 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:20.822 17:47:40 -- common/autotest_common.sh@1198 -- # local i=0 00:22:20.822 17:47:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:20.822 17:47:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:20.822 17:47:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:20.822 17:47:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:20.822 17:47:40 -- common/autotest_common.sh@1210 -- # return 0 00:22:20.822 17:47:40 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:22:20.822 17:47:40 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:22:20.822 nvmf hotplug test: fio successful as expected 00:22:20.822 17:47:40 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.822 17:47:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.822 17:47:40 -- common/autotest_common.sh@10 -- # set +x 00:22:20.822 17:47:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.822 17:47:40 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:22:20.822 17:47:40 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:22:20.822 17:47:40 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:22:20.822 17:47:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:20.822 17:47:40 -- nvmf/common.sh@116 -- # sync 00:22:20.822 17:47:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:20.822 17:47:40 -- nvmf/common.sh@119 -- # set +e 00:22:20.822 17:47:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:20.822 17:47:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:20.822 rmmod nvme_tcp 00:22:20.822 rmmod nvme_fabrics 00:22:20.822 rmmod nvme_keyring 00:22:20.822 17:47:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:20.822 17:47:40 -- nvmf/common.sh@123 -- # set -e 00:22:20.822 17:47:40 -- nvmf/common.sh@124 -- # return 0 00:22:20.822 17:47:40 -- nvmf/common.sh@477 -- # '[' -n 669426 ']' 00:22:20.822 17:47:40 -- nvmf/common.sh@478 -- # killprocess 669426 00:22:20.822 17:47:40 -- common/autotest_common.sh@926 -- # '[' -z 669426 ']' 00:22:20.822 17:47:40 -- common/autotest_common.sh@930 -- # kill -0 
669426 00:22:20.822 17:47:40 -- common/autotest_common.sh@931 -- # uname 00:22:20.822 17:47:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:20.822 17:47:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 669426 00:22:20.822 17:47:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:20.822 17:47:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:20.822 17:47:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 669426' 00:22:20.822 killing process with pid 669426 00:22:20.822 17:47:40 -- common/autotest_common.sh@945 -- # kill 669426 00:22:20.822 17:47:40 -- common/autotest_common.sh@950 -- # wait 669426 00:22:20.822 17:47:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:20.822 17:47:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:20.822 17:47:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:20.822 17:47:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.822 17:47:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:20.822 17:47:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.822 17:47:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.822 17:47:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.391 17:47:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:21.391 00:22:21.391 real 1m12.732s 00:22:21.391 user 4m24.003s 00:22:21.391 sys 0m5.920s 00:22:21.391 17:47:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:21.391 17:47:42 -- common/autotest_common.sh@10 -- # set +x 00:22:21.391 ************************************ 00:22:21.391 END TEST nvmf_initiator_timeout 00:22:21.391 ************************************ 00:22:21.391 17:47:42 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:22:21.391 17:47:42 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:22:21.391 17:47:42 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:22:21.391 17:47:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:21.391 17:47:42 -- common/autotest_common.sh@10 -- # set +x 00:22:26.667 17:47:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:26.667 17:47:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:26.667 17:47:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:26.667 17:47:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:26.667 17:47:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:26.667 17:47:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:26.667 17:47:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:26.667 17:47:48 -- nvmf/common.sh@294 -- # net_devs=() 00:22:26.667 17:47:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:26.667 17:47:48 -- nvmf/common.sh@295 -- # e810=() 00:22:26.667 17:47:48 -- nvmf/common.sh@295 -- # local -ga e810 00:22:26.667 17:47:48 -- nvmf/common.sh@296 -- # x722=() 00:22:26.667 17:47:48 -- nvmf/common.sh@296 -- # local -ga x722 00:22:26.667 17:47:48 -- nvmf/common.sh@297 -- # mlx=() 00:22:26.667 17:47:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:26.667 17:47:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.667 17:47:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.667 17:47:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.667 17:47:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.667 17:47:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
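The array appends traced here catalogue the PCI device IDs the suite supports (E810, X722, and Mellanox parts); each matching function is then resolved to its kernel net device through sysfs in the steps that follow. A small stand-alone sketch of that sysfs resolution, using the 0000:86:00.x slots this host reports (slots and device names vary per machine):

    # For each PCI function, the kernel lists its net device(s) under
    # /sys/bus/pci/devices/<slot>/net/ -- the same lookup the trace performs.
    for pci in 0000:86:00.0 0000:86:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "Found net device under $pci: $(basename "$netdir")"
        done
    done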
00:22:26.667 17:47:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.667 17:47:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.667 17:47:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.667 17:47:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.667 17:47:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.667 17:47:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.667 17:47:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:26.667 17:47:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:26.667 17:47:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:26.667 17:47:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:26.667 17:47:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:26.667 17:47:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:26.667 17:47:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:26.667 17:47:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:26.667 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:26.667 17:47:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:26.667 17:47:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:26.667 17:47:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.667 17:47:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.667 17:47:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:26.667 17:47:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:26.667 17:47:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:26.667 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:26.668 17:47:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:26.668 17:47:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:26.668 17:47:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.668 17:47:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.668 17:47:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:26.668 17:47:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:26.668 17:47:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:26.668 17:47:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:26.668 17:47:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:26.668 17:47:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.668 17:47:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:26.668 17:47:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.668 17:47:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:26.668 Found net devices under 0000:86:00.0: cvl_0_0 00:22:26.668 17:47:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.668 17:47:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:26.668 17:47:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.668 17:47:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:26.668 17:47:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.668 17:47:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:26.668 Found net devices under 0000:86:00.1: cvl_0_1 00:22:26.668 17:47:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.668 17:47:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:26.668 17:47:48 
-- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.668 17:47:48 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:22:26.668 17:47:48 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:26.668 17:47:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:26.668 17:47:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:26.668 17:47:48 -- common/autotest_common.sh@10 -- # set +x 00:22:26.668 ************************************ 00:22:26.668 START TEST nvmf_perf_adq 00:22:26.668 ************************************ 00:22:26.668 17:47:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:26.928 * Looking for test storage... 00:22:26.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:26.928 17:47:48 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.928 17:47:48 -- nvmf/common.sh@7 -- # uname -s 00:22:26.928 17:47:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.928 17:47:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.928 17:47:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.928 17:47:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.928 17:47:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.928 17:47:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.928 17:47:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.928 17:47:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.928 17:47:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.928 17:47:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.928 17:47:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.928 17:47:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.928 17:47:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.928 17:47:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.928 17:47:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.928 17:47:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.928 17:47:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.928 17:47:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.928 17:47:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.928 17:47:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.928 17:47:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.928 17:47:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.928 17:47:48 -- paths/export.sh@5 -- # export PATH 00:22:26.928 17:47:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.928 17:47:48 -- nvmf/common.sh@46 -- # : 0 00:22:26.928 17:47:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:26.928 17:47:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:26.928 17:47:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:26.928 17:47:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.929 17:47:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.929 17:47:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:26.929 17:47:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:26.929 17:47:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:26.929 17:47:48 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:26.929 17:47:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:26.929 17:47:48 -- common/autotest_common.sh@10 -- # set +x 00:22:32.205 17:47:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:32.205 17:47:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:32.205 17:47:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:32.205 17:47:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:32.205 17:47:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:32.205 17:47:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:32.205 17:47:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:32.205 17:47:53 -- nvmf/common.sh@294 -- # net_devs=() 00:22:32.205 17:47:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:32.205 17:47:53 -- nvmf/common.sh@295 -- # e810=() 00:22:32.205 17:47:53 -- nvmf/common.sh@295 -- # local -ga e810 00:22:32.205 17:47:53 -- nvmf/common.sh@296 -- # x722=() 00:22:32.205 17:47:53 -- nvmf/common.sh@296 -- # local -ga x722 00:22:32.205 17:47:53 -- nvmf/common.sh@297 -- # mlx=() 00:22:32.205 17:47:53 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:22:32.205 17:47:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.205 17:47:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:32.205 17:47:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:32.205 17:47:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:32.205 17:47:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:32.205 17:47:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:32.205 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:32.205 17:47:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:32.205 17:47:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:32.205 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:32.205 17:47:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:32.205 17:47:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:32.205 17:47:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:32.205 17:47:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.205 17:47:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:32.205 17:47:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.205 17:47:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:32.205 Found net devices under 0000:86:00.0: cvl_0_0 00:22:32.205 17:47:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.205 17:47:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:32.205 17:47:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:32.205 17:47:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:32.205 17:47:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.205 17:47:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:32.205 Found net devices under 0000:86:00.1: cvl_0_1 00:22:32.205 17:47:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.205 17:47:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:32.205 17:47:53 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.205 17:47:53 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:32.205 17:47:53 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:32.205 17:47:53 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:22:32.205 17:47:53 -- target/perf_adq.sh@52 -- # rmmod ice 00:22:33.142 17:47:54 -- target/perf_adq.sh@53 -- # modprobe ice 00:22:35.049 17:47:56 -- target/perf_adq.sh@54 -- # sleep 5 00:22:40.328 17:48:01 -- target/perf_adq.sh@67 -- # nvmftestinit 00:22:40.329 17:48:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:40.329 17:48:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.329 17:48:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:40.329 17:48:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:40.329 17:48:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:40.329 17:48:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.329 17:48:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.329 17:48:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.329 17:48:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:40.329 17:48:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:40.329 17:48:01 -- common/autotest_common.sh@10 -- # set +x 00:22:40.329 17:48:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:40.329 17:48:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:40.329 17:48:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:40.329 17:48:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:40.329 17:48:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:40.329 17:48:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:40.329 17:48:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:40.329 17:48:01 -- nvmf/common.sh@294 -- # net_devs=() 00:22:40.329 17:48:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:40.329 17:48:01 -- nvmf/common.sh@295 -- # e810=() 00:22:40.329 17:48:01 -- nvmf/common.sh@295 -- # local -ga e810 00:22:40.329 17:48:01 -- nvmf/common.sh@296 -- # x722=() 00:22:40.329 17:48:01 -- nvmf/common.sh@296 -- # local -ga x722 00:22:40.329 17:48:01 -- nvmf/common.sh@297 -- # mlx=() 00:22:40.329 17:48:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:40.329 17:48:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.329 17:48:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:40.329 17:48:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:40.329 17:48:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:40.329 17:48:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:40.329 17:48:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:40.329 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:40.329 17:48:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:40.329 17:48:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:40.329 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:40.329 17:48:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:40.329 17:48:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:40.329 17:48:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.329 17:48:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:40.329 17:48:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.329 17:48:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:40.329 Found net devices under 0000:86:00.0: cvl_0_0 00:22:40.329 17:48:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.329 17:48:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:40.329 17:48:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.329 17:48:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:40.329 17:48:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.329 17:48:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:40.329 Found net devices under 0000:86:00.1: cvl_0_1 00:22:40.329 17:48:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.329 17:48:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:40.329 17:48:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:40.329 17:48:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:40.329 17:48:01 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:40.329 17:48:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.329 17:48:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.329 17:48:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.329 17:48:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:40.329 17:48:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.329 17:48:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.329 17:48:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:40.329 17:48:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.329 17:48:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.329 17:48:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:40.329 17:48:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:40.329 17:48:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.329 17:48:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.329 17:48:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.329 17:48:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.329 17:48:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:40.329 17:48:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.329 17:48:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.329 17:48:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.329 17:48:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:40.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:22:40.329 00:22:40.329 --- 10.0.0.2 ping statistics --- 00:22:40.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.329 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:22:40.329 17:48:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:22:40.329 00:22:40.329 --- 10.0.0.1 ping statistics --- 00:22:40.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.329 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:22:40.329 17:48:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.329 17:48:01 -- nvmf/common.sh@410 -- # return 0 00:22:40.329 17:48:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:40.329 17:48:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.329 17:48:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:40.329 17:48:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.329 17:48:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:40.329 17:48:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:40.329 17:48:01 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:40.329 17:48:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:40.329 17:48:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:40.329 17:48:01 -- common/autotest_common.sh@10 -- # set +x 00:22:40.329 17:48:01 -- nvmf/common.sh@469 -- # nvmfpid=688173 00:22:40.329 17:48:01 -- nvmf/common.sh@470 -- # waitforlisten 688173 00:22:40.329 17:48:01 -- common/autotest_common.sh@819 -- # '[' -z 688173 ']' 00:22:40.329 17:48:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.329 17:48:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:40.329 17:48:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.329 17:48:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:40.329 17:48:01 -- common/autotest_common.sh@10 -- # set +x 00:22:40.329 17:48:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:40.329 [2024-07-24 17:48:01.665537] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:40.329 [2024-07-24 17:48:01.665581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.329 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.329 [2024-07-24 17:48:01.723463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.329 [2024-07-24 17:48:01.807019] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:40.329 [2024-07-24 17:48:01.807131] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.330 [2024-07-24 17:48:01.807139] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.330 [2024-07-24 17:48:01.807146] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
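For reference, the network plumbing that nvmf_tcp_init traced above amounts to the short sequence below. This is a minimal sketch assuming the ice port names cvl_0_0/cvl_0_1 seen on this host; one port is moved into a private namespace so the initiator reaches the target over a real NVMe/TCP hop rather than loopback.

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target listen address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420) through the host firewall
    ping -c 1 10.0.0.2                                                  # path check before nvmf_tgt is started

The two pings captured above (root namespace to 10.0.0.2, then from inside the namespace back to 10.0.0.1) are the path sanity check before the target application is launched inside the namespace.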
00:22:40.330 [2024-07-24 17:48:01.807188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.330 [2024-07-24 17:48:01.807205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.330 [2024-07-24 17:48:01.807296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.330 [2024-07-24 17:48:01.807297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.898 17:48:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:40.898 17:48:02 -- common/autotest_common.sh@852 -- # return 0 00:22:40.898 17:48:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:40.898 17:48:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:40.898 17:48:02 -- common/autotest_common.sh@10 -- # set +x 00:22:41.157 17:48:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.157 17:48:02 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:22:41.157 17:48:02 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:41.157 17:48:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.157 17:48:02 -- common/autotest_common.sh@10 -- # set +x 00:22:41.157 17:48:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.157 17:48:02 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:22:41.157 17:48:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.157 17:48:02 -- common/autotest_common.sh@10 -- # set +x 00:22:41.157 17:48:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.157 17:48:02 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:41.157 17:48:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.157 17:48:02 -- common/autotest_common.sh@10 -- # set +x 00:22:41.157 [2024-07-24 17:48:02.612824] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.157 17:48:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.157 17:48:02 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:41.157 17:48:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.157 17:48:02 -- common/autotest_common.sh@10 -- # set +x 00:22:41.157 Malloc1 00:22:41.157 17:48:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.157 17:48:02 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.157 17:48:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.157 17:48:02 -- common/autotest_common.sh@10 -- # set +x 00:22:41.157 17:48:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.157 17:48:02 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:41.157 17:48:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.157 17:48:02 -- common/autotest_common.sh@10 -- # set +x 00:22:41.157 17:48:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.157 17:48:02 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.157 17:48:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.157 17:48:02 -- common/autotest_common.sh@10 -- # set +x 00:22:41.157 [2024-07-24 17:48:02.660641] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.157 17:48:02 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.157 17:48:02 -- target/perf_adq.sh@73 -- # perfpid=688342 00:22:41.157 17:48:02 -- target/perf_adq.sh@74 -- # sleep 2 00:22:41.157 17:48:02 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:41.157 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.106 17:48:04 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:22:43.106 17:48:04 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:43.106 17:48:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.106 17:48:04 -- target/perf_adq.sh@76 -- # wc -l 00:22:43.106 17:48:04 -- common/autotest_common.sh@10 -- # set +x 00:22:43.106 17:48:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.364 17:48:04 -- target/perf_adq.sh@76 -- # count=4 00:22:43.364 17:48:04 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:22:43.364 17:48:04 -- target/perf_adq.sh@81 -- # wait 688342 00:22:51.476 Initializing NVMe Controllers 00:22:51.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:51.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:51.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:51.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:51.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:51.476 Initialization complete. Launching workers. 00:22:51.476 ======================================================== 00:22:51.476 Latency(us) 00:22:51.476 Device Information : IOPS MiB/s Average min max 00:22:51.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10975.40 42.87 5848.09 1721.56 46312.01 00:22:51.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10963.00 42.82 5837.79 1503.31 11834.16 00:22:51.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10925.50 42.68 5858.18 1468.02 11547.30 00:22:51.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10724.20 41.89 5967.47 1773.64 11726.16 00:22:51.476 ======================================================== 00:22:51.476 Total : 43588.09 170.27 5877.40 1468.02 46312.01 00:22:51.476 00:22:51.476 17:48:12 -- target/perf_adq.sh@82 -- # nvmftestfini 00:22:51.476 17:48:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:51.476 17:48:12 -- nvmf/common.sh@116 -- # sync 00:22:51.476 17:48:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:51.476 17:48:12 -- nvmf/common.sh@119 -- # set +e 00:22:51.476 17:48:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:51.476 17:48:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:51.476 rmmod nvme_tcp 00:22:51.476 rmmod nvme_fabrics 00:22:51.476 rmmod nvme_keyring 00:22:51.476 17:48:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:51.476 17:48:12 -- nvmf/common.sh@123 -- # set -e 00:22:51.476 17:48:12 -- nvmf/common.sh@124 -- # return 0 00:22:51.476 17:48:12 -- nvmf/common.sh@477 -- # '[' -n 688173 ']' 00:22:51.476 17:48:12 -- nvmf/common.sh@478 -- # killprocess 688173 00:22:51.476 17:48:12 -- common/autotest_common.sh@926 -- # '[' -z 688173 ']' 00:22:51.476 17:48:12 -- common/autotest_common.sh@930 -- # 
kill -0 688173 00:22:51.476 17:48:12 -- common/autotest_common.sh@931 -- # uname 00:22:51.476 17:48:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:51.476 17:48:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 688173 00:22:51.476 17:48:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:51.476 17:48:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:51.476 17:48:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 688173' 00:22:51.476 killing process with pid 688173 00:22:51.476 17:48:12 -- common/autotest_common.sh@945 -- # kill 688173 00:22:51.476 17:48:12 -- common/autotest_common.sh@950 -- # wait 688173 00:22:51.736 17:48:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:51.736 17:48:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:51.736 17:48:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:51.736 17:48:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.736 17:48:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:51.736 17:48:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.736 17:48:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.736 17:48:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.277 17:48:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:54.277 17:48:15 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:22:54.277 17:48:15 -- target/perf_adq.sh@52 -- # rmmod ice 00:22:54.847 17:48:16 -- target/perf_adq.sh@53 -- # modprobe ice 00:22:56.758 17:48:18 -- target/perf_adq.sh@54 -- # sleep 5 00:23:02.038 17:48:23 -- target/perf_adq.sh@87 -- # nvmftestinit 00:23:02.038 17:48:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:02.038 17:48:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.038 17:48:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:02.038 17:48:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:02.038 17:48:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:02.038 17:48:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.038 17:48:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.038 17:48:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.038 17:48:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:02.038 17:48:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:02.038 17:48:23 -- common/autotest_common.sh@10 -- # set +x 00:23:02.038 17:48:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:02.038 17:48:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:02.038 17:48:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:02.038 17:48:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:02.038 17:48:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:02.038 17:48:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:02.038 17:48:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:02.038 17:48:23 -- nvmf/common.sh@294 -- # net_devs=() 00:23:02.038 17:48:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:02.038 17:48:23 -- nvmf/common.sh@295 -- # e810=() 00:23:02.038 17:48:23 -- nvmf/common.sh@295 -- # local -ga e810 00:23:02.038 17:48:23 -- nvmf/common.sh@296 -- # x722=() 00:23:02.038 17:48:23 -- nvmf/common.sh@296 -- # local -ga x722 00:23:02.038 17:48:23 -- nvmf/common.sh@297 -- # mlx=() 00:23:02.038 17:48:23 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:23:02.038 17:48:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.038 17:48:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:02.038 17:48:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:02.038 17:48:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:02.038 17:48:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:02.038 17:48:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:02.038 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:02.038 17:48:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:02.038 17:48:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:02.038 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:02.038 17:48:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:02.038 17:48:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:02.038 17:48:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.038 17:48:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:02.038 17:48:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.038 17:48:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:02.038 Found net devices under 0000:86:00.0: cvl_0_0 00:23:02.038 17:48:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.038 17:48:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:02.038 17:48:23 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.038 17:48:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:02.038 17:48:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.038 17:48:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:02.038 Found net devices under 0000:86:00.1: cvl_0_1 00:23:02.038 17:48:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.038 17:48:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:02.038 17:48:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:02.038 17:48:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:02.038 17:48:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:02.038 17:48:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.038 17:48:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.038 17:48:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.038 17:48:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:02.038 17:48:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.038 17:48:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.038 17:48:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:02.038 17:48:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.038 17:48:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.038 17:48:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:02.038 17:48:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:02.038 17:48:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.038 17:48:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.038 17:48:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.038 17:48:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.038 17:48:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:02.038 17:48:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.038 17:48:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.038 17:48:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.038 17:48:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:02.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:23:02.039 00:23:02.039 --- 10.0.0.2 ping statistics --- 00:23:02.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.039 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:02.039 17:48:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:02.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:23:02.039 00:23:02.039 --- 10.0.0.1 ping statistics --- 00:23:02.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.039 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:23:02.039 17:48:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.039 17:48:23 -- nvmf/common.sh@410 -- # return 0 00:23:02.039 17:48:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:02.039 17:48:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.039 17:48:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:02.039 17:48:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:02.039 17:48:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.039 17:48:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:02.039 17:48:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:02.039 17:48:23 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:23:02.039 17:48:23 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:02.039 17:48:23 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:02.039 17:48:23 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:02.039 net.core.busy_poll = 1 00:23:02.039 17:48:23 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:02.039 net.core.busy_read = 1 00:23:02.039 17:48:23 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:02.039 17:48:23 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:02.039 17:48:23 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:02.039 17:48:23 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:02.039 17:48:23 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:02.039 17:48:23 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:02.039 17:48:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:02.039 17:48:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:02.039 17:48:23 -- common/autotest_common.sh@10 -- # set +x 00:23:02.039 17:48:23 -- nvmf/common.sh@469 -- # nvmfpid=692466 00:23:02.039 17:48:23 -- nvmf/common.sh@470 -- # waitforlisten 692466 00:23:02.039 17:48:23 -- common/autotest_common.sh@819 -- # '[' -z 692466 ']' 00:23:02.039 17:48:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.039 17:48:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:02.039 17:48:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
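The adq_configure_driver steps just traced are the host-side half of ADQ. As a sketch, and assuming the same namespaced port cvl_0_0 and listen address 10.0.0.2:4420 as above, they come down to:

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1     # busy-poll sockets instead of sleeping on interrupts
    sysctl -w net.core.busy_read=1
    # mqprio: TC0 gets queues 0-1 for default traffic, TC1 gets queues 2-3 for the NVMe/TCP flow
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
    # steer TCP traffic to 10.0.0.2:4420 into hardware traffic class 1
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # run from the SPDK repo root; aligns XPS with the ADQ queues

Note that the ice module was unloaded and reloaded (rmmod ice; modprobe ice; sleep 5) beforehand, as traced earlier, so the port starts from a clean channel configuration.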
00:23:02.039 17:48:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:02.039 17:48:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:02.039 17:48:23 -- common/autotest_common.sh@10 -- # set +x 00:23:02.300 [2024-07-24 17:48:23.684420] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:02.300 [2024-07-24 17:48:23.684470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.300 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.300 [2024-07-24 17:48:23.742476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.300 [2024-07-24 17:48:23.821516] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:02.300 [2024-07-24 17:48:23.821621] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.300 [2024-07-24 17:48:23.821629] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.300 [2024-07-24 17:48:23.821635] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.300 [2024-07-24 17:48:23.821670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.300 [2024-07-24 17:48:23.821771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.300 [2024-07-24 17:48:23.821835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.300 [2024-07-24 17:48:23.821836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.254 17:48:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:03.254 17:48:24 -- common/autotest_common.sh@852 -- # return 0 00:23:03.254 17:48:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:03.254 17:48:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:03.254 17:48:24 -- common/autotest_common.sh@10 -- # set +x 00:23:03.254 17:48:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.254 17:48:24 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:23:03.254 17:48:24 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:03.254 17:48:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.254 17:48:24 -- common/autotest_common.sh@10 -- # set +x 00:23:03.254 17:48:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.254 17:48:24 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:03.254 17:48:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.254 17:48:24 -- common/autotest_common.sh@10 -- # set +x 00:23:03.254 17:48:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.254 17:48:24 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:03.254 17:48:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.254 17:48:24 -- common/autotest_common.sh@10 -- # set +x 00:23:03.254 [2024-07-24 17:48:24.599559] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.254 17:48:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
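On the target side, adq_configure_nvmf_target 1 drives the JSON-RPC calls seen here through the harness's rpc_cmd wrapper. Issued by hand against the nvmf_tgt started above, the full sequence would look roughly like the sketch below (an assumption-laden example: scripts/rpc.py from the SPDK repo root, the default /var/tmp/spdk.sock RPC socket, and --sock-priority 1 matching the ADQ traffic class created with tc):

    scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After spdk_nvme_perf connects, the test queries nvmf_get_stats and counts poll groups by current_io_qpairs: the earlier non-ADQ run expected all four poll groups to carry exactly one queue pair, while this ADQ run expects at least two poll groups to remain idle, indicating that connections were grouped onto the dedicated queue set as ADQ intends.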
00:23:03.254 17:48:24 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:03.254 17:48:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.254 17:48:24 -- common/autotest_common.sh@10 -- # set +x 00:23:03.254 Malloc1 00:23:03.254 17:48:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.254 17:48:24 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.254 17:48:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.254 17:48:24 -- common/autotest_common.sh@10 -- # set +x 00:23:03.254 17:48:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.254 17:48:24 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:03.254 17:48:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.254 17:48:24 -- common/autotest_common.sh@10 -- # set +x 00:23:03.254 17:48:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.254 17:48:24 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.254 17:48:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.254 17:48:24 -- common/autotest_common.sh@10 -- # set +x 00:23:03.254 [2024-07-24 17:48:24.643403] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.254 17:48:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.254 17:48:24 -- target/perf_adq.sh@94 -- # perfpid=692722 00:23:03.254 17:48:24 -- target/perf_adq.sh@95 -- # sleep 2 00:23:03.254 17:48:24 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:03.254 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.155 17:48:26 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:23:05.155 17:48:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.155 17:48:26 -- target/perf_adq.sh@97 -- # wc -l 00:23:05.155 17:48:26 -- common/autotest_common.sh@10 -- # set +x 00:23:05.155 17:48:26 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:05.155 17:48:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.155 17:48:26 -- target/perf_adq.sh@97 -- # count=2 00:23:05.155 17:48:26 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:23:05.155 17:48:26 -- target/perf_adq.sh@103 -- # wait 692722 00:23:13.282 Initializing NVMe Controllers 00:23:13.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:13.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:13.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:13.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:13.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:13.282 Initialization complete. Launching workers. 
00:23:13.282 ======================================================== 00:23:13.282 Latency(us) 00:23:13.282 Device Information : IOPS MiB/s Average min max 00:23:13.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8215.60 32.09 7790.92 1673.60 53852.55 00:23:13.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7499.20 29.29 8535.06 1573.04 53028.55 00:23:13.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7450.10 29.10 8590.48 1833.37 54997.78 00:23:13.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7824.30 30.56 8180.94 1708.14 53334.09 00:23:13.282 ======================================================== 00:23:13.282 Total : 30989.19 121.05 8261.69 1573.04 54997.78 00:23:13.282 00:23:13.282 17:48:34 -- target/perf_adq.sh@104 -- # nvmftestfini 00:23:13.282 17:48:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:13.282 17:48:34 -- nvmf/common.sh@116 -- # sync 00:23:13.282 17:48:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:13.282 17:48:34 -- nvmf/common.sh@119 -- # set +e 00:23:13.282 17:48:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:13.282 17:48:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:13.282 rmmod nvme_tcp 00:23:13.282 rmmod nvme_fabrics 00:23:13.541 rmmod nvme_keyring 00:23:13.541 17:48:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:13.541 17:48:34 -- nvmf/common.sh@123 -- # set -e 00:23:13.541 17:48:34 -- nvmf/common.sh@124 -- # return 0 00:23:13.541 17:48:34 -- nvmf/common.sh@477 -- # '[' -n 692466 ']' 00:23:13.541 17:48:34 -- nvmf/common.sh@478 -- # killprocess 692466 00:23:13.541 17:48:34 -- common/autotest_common.sh@926 -- # '[' -z 692466 ']' 00:23:13.541 17:48:34 -- common/autotest_common.sh@930 -- # kill -0 692466 00:23:13.541 17:48:34 -- common/autotest_common.sh@931 -- # uname 00:23:13.541 17:48:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:13.541 17:48:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 692466 00:23:13.541 17:48:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:13.541 17:48:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:13.541 17:48:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 692466' 00:23:13.541 killing process with pid 692466 00:23:13.541 17:48:34 -- common/autotest_common.sh@945 -- # kill 692466 00:23:13.541 17:48:34 -- common/autotest_common.sh@950 -- # wait 692466 00:23:13.801 17:48:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:13.801 17:48:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:13.801 17:48:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:13.801 17:48:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.801 17:48:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:13.801 17:48:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.801 17:48:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.801 17:48:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.093 17:48:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:17.093 17:48:38 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:23:17.093 00:23:17.093 real 0m50.075s 00:23:17.093 user 2m48.447s 00:23:17.093 sys 0m9.786s 00:23:17.093 17:48:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.093 17:48:38 -- common/autotest_common.sh@10 -- # set +x 00:23:17.093 
************************************ 00:23:17.093 END TEST nvmf_perf_adq 00:23:17.093 ************************************ 00:23:17.093 17:48:38 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:17.093 17:48:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:17.093 17:48:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:17.093 17:48:38 -- common/autotest_common.sh@10 -- # set +x 00:23:17.093 ************************************ 00:23:17.093 START TEST nvmf_shutdown 00:23:17.093 ************************************ 00:23:17.093 17:48:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:17.093 * Looking for test storage... 00:23:17.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:17.093 17:48:38 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.093 17:48:38 -- nvmf/common.sh@7 -- # uname -s 00:23:17.093 17:48:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.093 17:48:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.094 17:48:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.094 17:48:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.094 17:48:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.094 17:48:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.094 17:48:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.094 17:48:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.094 17:48:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.094 17:48:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.094 17:48:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:17.094 17:48:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:17.094 17:48:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.094 17:48:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.094 17:48:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.094 17:48:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.094 17:48:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.094 17:48:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.094 17:48:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.094 17:48:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.094 17:48:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.094 17:48:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.094 17:48:38 -- paths/export.sh@5 -- # export PATH 00:23:17.094 17:48:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.094 17:48:38 -- nvmf/common.sh@46 -- # : 0 00:23:17.094 17:48:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:17.094 17:48:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:17.094 17:48:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:17.094 17:48:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.094 17:48:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.094 17:48:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:17.094 17:48:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:17.094 17:48:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:17.094 17:48:38 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:17.094 17:48:38 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:17.094 17:48:38 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:17.094 17:48:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:17.094 17:48:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:17.094 17:48:38 -- common/autotest_common.sh@10 -- # set +x 00:23:17.094 ************************************ 00:23:17.094 START TEST nvmf_shutdown_tc1 00:23:17.094 ************************************ 00:23:17.094 17:48:38 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:23:17.094 17:48:38 -- target/shutdown.sh@74 -- # starttarget 00:23:17.094 17:48:38 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:17.094 17:48:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:17.094 17:48:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.094 17:48:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:17.094 17:48:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:17.094 17:48:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:17.094 
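The NVME_HOSTNQN/NVME_HOSTID pair generated a few lines up (together with NVME_CONNECT='nvme connect') identifies the initiator in the nvme-cli based parts of the suite. As a purely illustrative example of how those variables are meant to be used once a subsystem such as nqn.2016-06.io.spdk:cnode1 is listening on 10.0.0.2:4420 (this exact command is not run by the shutdown test):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562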
17:48:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.094 17:48:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.094 17:48:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.094 17:48:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:17.094 17:48:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:17.094 17:48:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:17.094 17:48:38 -- common/autotest_common.sh@10 -- # set +x 00:23:22.375 17:48:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:22.376 17:48:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:22.376 17:48:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:22.376 17:48:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:22.376 17:48:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:22.376 17:48:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:22.376 17:48:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:22.376 17:48:43 -- nvmf/common.sh@294 -- # net_devs=() 00:23:22.376 17:48:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:22.376 17:48:43 -- nvmf/common.sh@295 -- # e810=() 00:23:22.376 17:48:43 -- nvmf/common.sh@295 -- # local -ga e810 00:23:22.376 17:48:43 -- nvmf/common.sh@296 -- # x722=() 00:23:22.376 17:48:43 -- nvmf/common.sh@296 -- # local -ga x722 00:23:22.376 17:48:43 -- nvmf/common.sh@297 -- # mlx=() 00:23:22.376 17:48:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:22.376 17:48:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.376 17:48:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:22.376 17:48:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:22.376 17:48:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:22.376 17:48:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:22.376 17:48:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:22.376 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:22.376 17:48:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:23:22.376 17:48:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:22.376 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:22.376 17:48:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:22.376 17:48:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:22.376 17:48:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.376 17:48:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:22.376 17:48:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.376 17:48:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:22.376 Found net devices under 0000:86:00.0: cvl_0_0 00:23:22.376 17:48:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.376 17:48:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:22.376 17:48:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.376 17:48:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:22.376 17:48:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.376 17:48:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:22.376 Found net devices under 0000:86:00.1: cvl_0_1 00:23:22.376 17:48:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.376 17:48:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:22.376 17:48:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:22.376 17:48:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:22.376 17:48:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:22.376 17:48:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.376 17:48:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.376 17:48:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.376 17:48:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:22.376 17:48:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.376 17:48:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.376 17:48:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:22.376 17:48:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.376 17:48:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.376 17:48:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:22.376 17:48:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:22.376 17:48:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.376 17:48:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.376 17:48:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.376 17:48:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.376 17:48:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:22.376 17:48:43 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.376 17:48:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.376 17:48:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.637 17:48:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:22.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:23:22.637 00:23:22.637 --- 10.0.0.2 ping statistics --- 00:23:22.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.637 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:22.637 17:48:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:23:22.637 00:23:22.637 --- 10.0.0.1 ping statistics --- 00:23:22.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.637 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:23:22.637 17:48:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.637 17:48:44 -- nvmf/common.sh@410 -- # return 0 00:23:22.637 17:48:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:22.637 17:48:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.637 17:48:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:22.637 17:48:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:22.637 17:48:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.637 17:48:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:22.637 17:48:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:22.637 17:48:44 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:22.637 17:48:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:22.637 17:48:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:22.637 17:48:44 -- common/autotest_common.sh@10 -- # set +x 00:23:22.637 17:48:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:22.637 17:48:44 -- nvmf/common.sh@469 -- # nvmfpid=697992 00:23:22.637 17:48:44 -- nvmf/common.sh@470 -- # waitforlisten 697992 00:23:22.637 17:48:44 -- common/autotest_common.sh@819 -- # '[' -z 697992 ']' 00:23:22.637 17:48:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.637 17:48:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:22.637 17:48:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.637 17:48:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:22.637 17:48:44 -- common/autotest_common.sh@10 -- # set +x 00:23:22.637 [2024-07-24 17:48:44.062894] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
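The nvmf_tcp_init trace above reduces to a small recipe: keep one E810 port (cvl_0_1) in the root namespace as the initiator, move its sibling (cvl_0_0) into a namespace that hosts the target, address them as 10.0.0.1/10.0.0.2, punch TCP/4420 through the firewall, verify reachability, load nvme-tcp and start the target inside the namespace. Condensed from the commands shown above (interface names are specific to this node):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # firewall rule the test adds for NVMe/TCP
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The -m 0x1E core mask (binary 11110) is why the target's reactors come up on cores 1 through 4 just below, leaving core 0 for the single-core bdev_svc/bdevperf processes started later in this log; -e 0xFFFF corresponds to the 'Tracepoint Group Mask 0xFFFF specified' notice.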
00:23:22.637 [2024-07-24 17:48:44.062941] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.637 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.637 [2024-07-24 17:48:44.122012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:22.637 [2024-07-24 17:48:44.199652] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:22.637 [2024-07-24 17:48:44.199765] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.637 [2024-07-24 17:48:44.199773] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.637 [2024-07-24 17:48:44.199784] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:22.637 [2024-07-24 17:48:44.199883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.637 [2024-07-24 17:48:44.199967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.637 [2024-07-24 17:48:44.200085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.637 [2024-07-24 17:48:44.200085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:23.577 17:48:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:23.577 17:48:44 -- common/autotest_common.sh@852 -- # return 0 00:23:23.577 17:48:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:23.577 17:48:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:23.577 17:48:44 -- common/autotest_common.sh@10 -- # set +x 00:23:23.577 17:48:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.577 17:48:44 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.577 17:48:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:23.578 17:48:44 -- common/autotest_common.sh@10 -- # set +x 00:23:23.578 [2024-07-24 17:48:44.918300] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.578 17:48:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:23.578 17:48:44 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:23.578 17:48:44 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:23.578 17:48:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:23.578 17:48:44 -- common/autotest_common.sh@10 -- # set +x 00:23:23.578 17:48:44 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:23.578 17:48:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.578 17:48:44 -- target/shutdown.sh@28 -- # cat 00:23:23.578 17:48:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.578 17:48:44 -- target/shutdown.sh@28 -- # cat 00:23:23.578 17:48:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.578 17:48:44 -- target/shutdown.sh@28 -- # cat 00:23:23.578 17:48:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.578 17:48:44 -- target/shutdown.sh@28 -- # cat 00:23:23.578 17:48:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.578 17:48:44 -- target/shutdown.sh@28 -- # cat 00:23:23.578 17:48:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.578 17:48:44 -- 
target/shutdown.sh@28 -- # cat 00:23:23.578 17:48:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.578 17:48:44 -- target/shutdown.sh@28 -- # cat 00:23:23.578 17:48:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.578 17:48:44 -- target/shutdown.sh@28 -- # cat 00:23:23.578 17:48:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.578 17:48:44 -- target/shutdown.sh@28 -- # cat 00:23:23.578 17:48:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:23.578 17:48:44 -- target/shutdown.sh@28 -- # cat 00:23:23.578 17:48:44 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:23.578 17:48:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:23.578 17:48:44 -- common/autotest_common.sh@10 -- # set +x 00:23:23.578 Malloc1 00:23:23.578 [2024-07-24 17:48:45.014465] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.578 Malloc2 00:23:23.578 Malloc3 00:23:23.578 Malloc4 00:23:23.578 Malloc5 00:23:23.838 Malloc6 00:23:23.838 Malloc7 00:23:23.838 Malloc8 00:23:23.838 Malloc9 00:23:23.838 Malloc10 00:23:23.838 17:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:23.838 17:48:45 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:23.838 17:48:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:23.838 17:48:45 -- common/autotest_common.sh@10 -- # set +x 00:23:24.098 17:48:45 -- target/shutdown.sh@78 -- # perfpid=698281 00:23:24.098 17:48:45 -- target/shutdown.sh@79 -- # waitforlisten 698281 /var/tmp/bdevperf.sock 00:23:24.098 17:48:45 -- common/autotest_common.sh@819 -- # '[' -z 698281 ']' 00:23:24.098 17:48:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.098 17:48:45 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:24.098 17:48:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:24.098 17:48:45 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:24.099 17:48:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
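create_subsystems above appends one block of RPCs per subsystem (1..10) to rpcs.txt and then feeds the whole file to rpc_cmd; the file's contents are not echoed into the log. Given MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 from shutdown.sh, the per-subsystem block amounts to something like the following (a hedged reconstruction, not a quote of the script; the serial number is made up):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

which lines up with what the target reports: Malloc1 through Malloc10 get created and a listener comes up on 10.0.0.2 port 4420.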
00:23:24.099 17:48:45 -- nvmf/common.sh@520 -- # config=() 00:23:24.099 17:48:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:24.099 17:48:45 -- nvmf/common.sh@520 -- # local subsystem config 00:23:24.099 17:48:45 -- common/autotest_common.sh@10 -- # set +x 00:23:24.099 17:48:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:24.099 { 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme$subsystem", 00:23:24.099 "trtype": "$TEST_TRANSPORT", 00:23:24.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "$NVMF_PORT", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.099 "hdgst": ${hdgst:-false}, 00:23:24.099 "ddgst": ${ddgst:-false} 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 } 00:23:24.099 EOF 00:23:24.099 )") 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # cat 00:23:24.099 17:48:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:24.099 { 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme$subsystem", 00:23:24.099 "trtype": "$TEST_TRANSPORT", 00:23:24.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "$NVMF_PORT", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.099 "hdgst": ${hdgst:-false}, 00:23:24.099 "ddgst": ${ddgst:-false} 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 } 00:23:24.099 EOF 00:23:24.099 )") 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # cat 00:23:24.099 17:48:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:24.099 { 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme$subsystem", 00:23:24.099 "trtype": "$TEST_TRANSPORT", 00:23:24.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "$NVMF_PORT", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.099 "hdgst": ${hdgst:-false}, 00:23:24.099 "ddgst": ${ddgst:-false} 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 } 00:23:24.099 EOF 00:23:24.099 )") 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # cat 00:23:24.099 17:48:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:24.099 { 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme$subsystem", 00:23:24.099 "trtype": "$TEST_TRANSPORT", 00:23:24.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "$NVMF_PORT", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.099 "hdgst": ${hdgst:-false}, 00:23:24.099 "ddgst": ${ddgst:-false} 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 } 00:23:24.099 EOF 00:23:24.099 )") 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # cat 00:23:24.099 17:48:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:24.099 { 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme$subsystem", 00:23:24.099 "trtype": 
"$TEST_TRANSPORT", 00:23:24.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "$NVMF_PORT", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.099 "hdgst": ${hdgst:-false}, 00:23:24.099 "ddgst": ${ddgst:-false} 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 } 00:23:24.099 EOF 00:23:24.099 )") 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # cat 00:23:24.099 17:48:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:24.099 { 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme$subsystem", 00:23:24.099 "trtype": "$TEST_TRANSPORT", 00:23:24.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "$NVMF_PORT", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.099 "hdgst": ${hdgst:-false}, 00:23:24.099 "ddgst": ${ddgst:-false} 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 } 00:23:24.099 EOF 00:23:24.099 )") 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # cat 00:23:24.099 [2024-07-24 17:48:45.489792] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:24.099 [2024-07-24 17:48:45.489836] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:24.099 17:48:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:24.099 { 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme$subsystem", 00:23:24.099 "trtype": "$TEST_TRANSPORT", 00:23:24.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "$NVMF_PORT", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.099 "hdgst": ${hdgst:-false}, 00:23:24.099 "ddgst": ${ddgst:-false} 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 } 00:23:24.099 EOF 00:23:24.099 )") 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # cat 00:23:24.099 17:48:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:24.099 { 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme$subsystem", 00:23:24.099 "trtype": "$TEST_TRANSPORT", 00:23:24.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "$NVMF_PORT", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.099 "hdgst": ${hdgst:-false}, 00:23:24.099 "ddgst": ${ddgst:-false} 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 } 00:23:24.099 EOF 00:23:24.099 )") 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # cat 00:23:24.099 17:48:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:24.099 { 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme$subsystem", 00:23:24.099 "trtype": "$TEST_TRANSPORT", 00:23:24.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "$NVMF_PORT", 
00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.099 "hdgst": ${hdgst:-false}, 00:23:24.099 "ddgst": ${ddgst:-false} 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 } 00:23:24.099 EOF 00:23:24.099 )") 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # cat 00:23:24.099 17:48:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:24.099 { 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme$subsystem", 00:23:24.099 "trtype": "$TEST_TRANSPORT", 00:23:24.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "$NVMF_PORT", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.099 "hdgst": ${hdgst:-false}, 00:23:24.099 "ddgst": ${ddgst:-false} 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 } 00:23:24.099 EOF 00:23:24.099 )") 00:23:24.099 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.099 17:48:45 -- nvmf/common.sh@542 -- # cat 00:23:24.099 17:48:45 -- nvmf/common.sh@544 -- # jq . 00:23:24.099 17:48:45 -- nvmf/common.sh@545 -- # IFS=, 00:23:24.099 17:48:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme1", 00:23:24.099 "trtype": "tcp", 00:23:24.099 "traddr": "10.0.0.2", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "4420", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.099 "hdgst": false, 00:23:24.099 "ddgst": false 00:23:24.099 }, 00:23:24.099 "method": "bdev_nvme_attach_controller" 00:23:24.099 },{ 00:23:24.099 "params": { 00:23:24.099 "name": "Nvme2", 00:23:24.099 "trtype": "tcp", 00:23:24.099 "traddr": "10.0.0.2", 00:23:24.099 "adrfam": "ipv4", 00:23:24.099 "trsvcid": "4420", 00:23:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:24.099 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:24.099 "hdgst": false, 00:23:24.099 "ddgst": false 00:23:24.100 }, 00:23:24.100 "method": "bdev_nvme_attach_controller" 00:23:24.100 },{ 00:23:24.100 "params": { 00:23:24.100 "name": "Nvme3", 00:23:24.100 "trtype": "tcp", 00:23:24.100 "traddr": "10.0.0.2", 00:23:24.100 "adrfam": "ipv4", 00:23:24.100 "trsvcid": "4420", 00:23:24.100 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:24.100 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:24.100 "hdgst": false, 00:23:24.100 "ddgst": false 00:23:24.100 }, 00:23:24.100 "method": "bdev_nvme_attach_controller" 00:23:24.100 },{ 00:23:24.100 "params": { 00:23:24.100 "name": "Nvme4", 00:23:24.100 "trtype": "tcp", 00:23:24.100 "traddr": "10.0.0.2", 00:23:24.100 "adrfam": "ipv4", 00:23:24.100 "trsvcid": "4420", 00:23:24.100 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:24.100 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:24.100 "hdgst": false, 00:23:24.100 "ddgst": false 00:23:24.100 }, 00:23:24.100 "method": "bdev_nvme_attach_controller" 00:23:24.100 },{ 00:23:24.100 "params": { 00:23:24.100 "name": "Nvme5", 00:23:24.100 "trtype": "tcp", 00:23:24.100 "traddr": "10.0.0.2", 00:23:24.100 "adrfam": "ipv4", 00:23:24.100 "trsvcid": "4420", 00:23:24.100 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:24.100 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:24.100 "hdgst": false, 00:23:24.100 "ddgst": false 00:23:24.100 }, 00:23:24.100 "method": "bdev_nvme_attach_controller" 00:23:24.100 },{ 00:23:24.100 "params": { 
00:23:24.100 "name": "Nvme6", 00:23:24.100 "trtype": "tcp", 00:23:24.100 "traddr": "10.0.0.2", 00:23:24.100 "adrfam": "ipv4", 00:23:24.100 "trsvcid": "4420", 00:23:24.100 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:24.100 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:24.100 "hdgst": false, 00:23:24.100 "ddgst": false 00:23:24.100 }, 00:23:24.100 "method": "bdev_nvme_attach_controller" 00:23:24.100 },{ 00:23:24.100 "params": { 00:23:24.100 "name": "Nvme7", 00:23:24.100 "trtype": "tcp", 00:23:24.100 "traddr": "10.0.0.2", 00:23:24.100 "adrfam": "ipv4", 00:23:24.100 "trsvcid": "4420", 00:23:24.100 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:24.100 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:24.100 "hdgst": false, 00:23:24.100 "ddgst": false 00:23:24.100 }, 00:23:24.100 "method": "bdev_nvme_attach_controller" 00:23:24.100 },{ 00:23:24.100 "params": { 00:23:24.100 "name": "Nvme8", 00:23:24.100 "trtype": "tcp", 00:23:24.100 "traddr": "10.0.0.2", 00:23:24.100 "adrfam": "ipv4", 00:23:24.100 "trsvcid": "4420", 00:23:24.100 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:24.100 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:24.100 "hdgst": false, 00:23:24.100 "ddgst": false 00:23:24.100 }, 00:23:24.100 "method": "bdev_nvme_attach_controller" 00:23:24.100 },{ 00:23:24.100 "params": { 00:23:24.100 "name": "Nvme9", 00:23:24.100 "trtype": "tcp", 00:23:24.100 "traddr": "10.0.0.2", 00:23:24.100 "adrfam": "ipv4", 00:23:24.100 "trsvcid": "4420", 00:23:24.100 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:24.100 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:24.100 "hdgst": false, 00:23:24.100 "ddgst": false 00:23:24.100 }, 00:23:24.100 "method": "bdev_nvme_attach_controller" 00:23:24.100 },{ 00:23:24.100 "params": { 00:23:24.100 "name": "Nvme10", 00:23:24.100 "trtype": "tcp", 00:23:24.100 "traddr": "10.0.0.2", 00:23:24.100 "adrfam": "ipv4", 00:23:24.100 "trsvcid": "4420", 00:23:24.100 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:24.100 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:24.100 "hdgst": false, 00:23:24.100 "ddgst": false 00:23:24.100 }, 00:23:24.100 "method": "bdev_nvme_attach_controller" 00:23:24.100 }' 00:23:24.100 [2024-07-24 17:48:45.547224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.100 [2024-07-24 17:48:45.617991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.481 17:48:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:25.481 17:48:46 -- common/autotest_common.sh@852 -- # return 0 00:23:25.481 17:48:46 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:25.481 17:48:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.481 17:48:46 -- common/autotest_common.sh@10 -- # set +x 00:23:25.481 17:48:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.481 17:48:46 -- target/shutdown.sh@83 -- # kill -9 698281 00:23:25.481 17:48:46 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:25.481 17:48:46 -- target/shutdown.sh@87 -- # sleep 1 00:23:26.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 698281 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:26.421 17:48:47 -- target/shutdown.sh@88 -- # kill -0 697992 00:23:26.421 17:48:47 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:26.421 17:48:47 -- 
target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:26.421 17:48:47 -- nvmf/common.sh@520 -- # config=() 00:23:26.421 17:48:47 -- nvmf/common.sh@520 -- # local subsystem config 00:23:26.421 17:48:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.421 17:48:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.421 { 00:23:26.421 "params": { 00:23:26.421 "name": "Nvme$subsystem", 00:23:26.421 "trtype": "$TEST_TRANSPORT", 00:23:26.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.421 "adrfam": "ipv4", 00:23:26.421 "trsvcid": "$NVMF_PORT", 00:23:26.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.421 "hdgst": ${hdgst:-false}, 00:23:26.421 "ddgst": ${ddgst:-false} 00:23:26.421 }, 00:23:26.421 "method": "bdev_nvme_attach_controller" 00:23:26.421 } 00:23:26.421 EOF 00:23:26.421 )") 00:23:26.421 17:48:47 -- nvmf/common.sh@542 -- # cat 00:23:26.421 17:48:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.421 17:48:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.421 { 00:23:26.421 "params": { 00:23:26.421 "name": "Nvme$subsystem", 00:23:26.421 "trtype": "$TEST_TRANSPORT", 00:23:26.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.421 "adrfam": "ipv4", 00:23:26.421 "trsvcid": "$NVMF_PORT", 00:23:26.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.421 "hdgst": ${hdgst:-false}, 00:23:26.421 "ddgst": ${ddgst:-false} 00:23:26.421 }, 00:23:26.421 "method": "bdev_nvme_attach_controller" 00:23:26.421 } 00:23:26.421 EOF 00:23:26.421 )") 00:23:26.421 17:48:47 -- nvmf/common.sh@542 -- # cat 00:23:26.421 17:48:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.421 17:48:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.421 { 00:23:26.421 "params": { 00:23:26.421 "name": "Nvme$subsystem", 00:23:26.421 "trtype": "$TEST_TRANSPORT", 00:23:26.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.421 "adrfam": "ipv4", 00:23:26.421 "trsvcid": "$NVMF_PORT", 00:23:26.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.421 "hdgst": ${hdgst:-false}, 00:23:26.421 "ddgst": ${ddgst:-false} 00:23:26.421 }, 00:23:26.421 "method": "bdev_nvme_attach_controller" 00:23:26.421 } 00:23:26.421 EOF 00:23:26.421 )") 00:23:26.421 17:48:48 -- nvmf/common.sh@542 -- # cat 00:23:26.421 17:48:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.421 17:48:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.421 { 00:23:26.421 "params": { 00:23:26.421 "name": "Nvme$subsystem", 00:23:26.421 "trtype": "$TEST_TRANSPORT", 00:23:26.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.421 "adrfam": "ipv4", 00:23:26.421 "trsvcid": "$NVMF_PORT", 00:23:26.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.421 "hdgst": ${hdgst:-false}, 00:23:26.421 "ddgst": ${ddgst:-false} 00:23:26.421 }, 00:23:26.421 "method": "bdev_nvme_attach_controller" 00:23:26.421 } 00:23:26.421 EOF 00:23:26.421 )") 00:23:26.421 17:48:48 -- nvmf/common.sh@542 -- # cat 00:23:26.421 17:48:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.421 17:48:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.421 { 00:23:26.421 "params": { 00:23:26.421 "name": "Nvme$subsystem", 00:23:26.421 "trtype": "$TEST_TRANSPORT", 00:23:26.421 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:23:26.421 "adrfam": "ipv4", 00:23:26.421 "trsvcid": "$NVMF_PORT", 00:23:26.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.421 "hdgst": ${hdgst:-false}, 00:23:26.421 "ddgst": ${ddgst:-false} 00:23:26.421 }, 00:23:26.421 "method": "bdev_nvme_attach_controller" 00:23:26.421 } 00:23:26.421 EOF 00:23:26.421 )") 00:23:26.421 17:48:48 -- nvmf/common.sh@542 -- # cat 00:23:26.681 17:48:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.681 17:48:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.681 { 00:23:26.681 "params": { 00:23:26.681 "name": "Nvme$subsystem", 00:23:26.681 "trtype": "$TEST_TRANSPORT", 00:23:26.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.681 "adrfam": "ipv4", 00:23:26.681 "trsvcid": "$NVMF_PORT", 00:23:26.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.681 "hdgst": ${hdgst:-false}, 00:23:26.681 "ddgst": ${ddgst:-false} 00:23:26.681 }, 00:23:26.681 "method": "bdev_nvme_attach_controller" 00:23:26.682 } 00:23:26.682 EOF 00:23:26.682 )") 00:23:26.682 17:48:48 -- nvmf/common.sh@542 -- # cat 00:23:26.682 17:48:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.682 17:48:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.682 { 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme$subsystem", 00:23:26.682 "trtype": "$TEST_TRANSPORT", 00:23:26.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "$NVMF_PORT", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.682 "hdgst": ${hdgst:-false}, 00:23:26.682 "ddgst": ${ddgst:-false} 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 } 00:23:26.682 EOF 00:23:26.682 )") 00:23:26.682 [2024-07-24 17:48:48.029458] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:26.682 [2024-07-24 17:48:48.029503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698771 ] 00:23:26.682 17:48:48 -- nvmf/common.sh@542 -- # cat 00:23:26.682 17:48:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.682 17:48:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.682 { 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme$subsystem", 00:23:26.682 "trtype": "$TEST_TRANSPORT", 00:23:26.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "$NVMF_PORT", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.682 "hdgst": ${hdgst:-false}, 00:23:26.682 "ddgst": ${ddgst:-false} 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 } 00:23:26.682 EOF 00:23:26.682 )") 00:23:26.682 17:48:48 -- nvmf/common.sh@542 -- # cat 00:23:26.682 17:48:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.682 17:48:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.682 { 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme$subsystem", 00:23:26.682 "trtype": "$TEST_TRANSPORT", 00:23:26.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "$NVMF_PORT", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.682 "hdgst": ${hdgst:-false}, 00:23:26.682 "ddgst": ${ddgst:-false} 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 } 00:23:26.682 EOF 00:23:26.682 )") 00:23:26.682 17:48:48 -- nvmf/common.sh@542 -- # cat 00:23:26.682 17:48:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.682 17:48:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.682 { 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme$subsystem", 00:23:26.682 "trtype": "$TEST_TRANSPORT", 00:23:26.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "$NVMF_PORT", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.682 "hdgst": ${hdgst:-false}, 00:23:26.682 "ddgst": ${ddgst:-false} 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 } 00:23:26.682 EOF 00:23:26.682 )") 00:23:26.682 17:48:48 -- nvmf/common.sh@542 -- # cat 00:23:26.682 17:48:48 -- nvmf/common.sh@544 -- # jq . 
00:23:26.682 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.682 17:48:48 -- nvmf/common.sh@545 -- # IFS=, 00:23:26.682 17:48:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme1", 00:23:26.682 "trtype": "tcp", 00:23:26.682 "traddr": "10.0.0.2", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "4420", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.682 "hdgst": false, 00:23:26.682 "ddgst": false 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 },{ 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme2", 00:23:26.682 "trtype": "tcp", 00:23:26.682 "traddr": "10.0.0.2", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "4420", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:26.682 "hdgst": false, 00:23:26.682 "ddgst": false 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 },{ 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme3", 00:23:26.682 "trtype": "tcp", 00:23:26.682 "traddr": "10.0.0.2", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "4420", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:26.682 "hdgst": false, 00:23:26.682 "ddgst": false 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 },{ 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme4", 00:23:26.682 "trtype": "tcp", 00:23:26.682 "traddr": "10.0.0.2", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "4420", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:26.682 "hdgst": false, 00:23:26.682 "ddgst": false 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 },{ 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme5", 00:23:26.682 "trtype": "tcp", 00:23:26.682 "traddr": "10.0.0.2", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "4420", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:26.682 "hdgst": false, 00:23:26.682 "ddgst": false 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 },{ 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme6", 00:23:26.682 "trtype": "tcp", 00:23:26.682 "traddr": "10.0.0.2", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "4420", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:26.682 "hdgst": false, 00:23:26.682 "ddgst": false 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 },{ 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme7", 00:23:26.682 "trtype": "tcp", 00:23:26.682 "traddr": "10.0.0.2", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "4420", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:26.682 "hdgst": false, 00:23:26.682 "ddgst": false 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 },{ 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme8", 00:23:26.682 "trtype": "tcp", 00:23:26.682 "traddr": "10.0.0.2", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "4420", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:26.682 "hdgst": false, 00:23:26.682 "ddgst": false 
00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 },{ 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme9", 00:23:26.682 "trtype": "tcp", 00:23:26.682 "traddr": "10.0.0.2", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "4420", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:26.682 "hdgst": false, 00:23:26.682 "ddgst": false 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 },{ 00:23:26.682 "params": { 00:23:26.682 "name": "Nvme10", 00:23:26.682 "trtype": "tcp", 00:23:26.682 "traddr": "10.0.0.2", 00:23:26.682 "adrfam": "ipv4", 00:23:26.682 "trsvcid": "4420", 00:23:26.682 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:26.682 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:26.682 "hdgst": false, 00:23:26.682 "ddgst": false 00:23:26.682 }, 00:23:26.682 "method": "bdev_nvme_attach_controller" 00:23:26.682 }' 00:23:26.682 [2024-07-24 17:48:48.087502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.682 [2024-07-24 17:48:48.159590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.591 Running I/O for 1 seconds... 00:23:29.530 00:23:29.530 Latency(us) 00:23:29.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.530 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.530 Verification LBA range: start 0x0 length 0x400 00:23:29.530 Nvme1n1 : 1.09 399.57 24.97 0.00 0.00 157529.84 11967.44 150447.86 00:23:29.530 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.530 Verification LBA range: start 0x0 length 0x400 00:23:29.530 Nvme2n1 : 1.07 498.07 31.13 0.00 0.00 124778.84 10542.75 108048.92 00:23:29.530 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.530 Verification LBA range: start 0x0 length 0x400 00:23:29.530 Nvme3n1 : 1.09 480.93 30.06 0.00 0.00 129650.42 13392.14 112152.04 00:23:29.530 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.530 Verification LBA range: start 0x0 length 0x400 00:23:29.530 Nvme4n1 : 1.06 407.51 25.47 0.00 0.00 148992.18 40119.43 121270.09 00:23:29.530 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.530 Verification LBA range: start 0x0 length 0x400 00:23:29.530 Nvme5n1 : 1.09 486.81 30.43 0.00 0.00 125762.05 4074.63 112152.04 00:23:29.530 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.530 Verification LBA range: start 0x0 length 0x400 00:23:29.530 Nvme6n1 : 1.09 480.59 30.04 0.00 0.00 127072.26 16070.57 107137.11 00:23:29.530 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.531 Verification LBA range: start 0x0 length 0x400 00:23:29.531 Nvme7n1 : 1.14 459.32 28.71 0.00 0.00 127899.79 14246.96 114887.46 00:23:29.531 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.531 Verification LBA range: start 0x0 length 0x400 00:23:29.531 Nvme8n1 : 1.14 459.03 28.69 0.00 0.00 127277.23 12765.27 111696.14 00:23:29.531 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.531 Verification LBA range: start 0x0 length 0x400 00:23:29.531 Nvme9n1 : 1.14 464.90 29.06 0.00 0.00 125138.59 9232.03 108960.72 00:23:29.531 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.531 Verification LBA range: start 0x0 length 0x400 00:23:29.531 Nvme10n1 : 1.10 478.81 
29.93 0.00 0.00 125363.59 6069.20 115799.26 00:23:29.531 =================================================================================================================== 00:23:29.531 Total : 4615.55 288.47 0.00 0.00 131174.00 4074.63 150447.86 00:23:29.531 17:48:51 -- target/shutdown.sh@93 -- # stoptarget 00:23:29.531 17:48:51 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:29.531 17:48:51 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:29.531 17:48:51 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:29.531 17:48:51 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:29.531 17:48:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:29.531 17:48:51 -- nvmf/common.sh@116 -- # sync 00:23:29.531 17:48:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:29.531 17:48:51 -- nvmf/common.sh@119 -- # set +e 00:23:29.531 17:48:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:29.531 17:48:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:29.531 rmmod nvme_tcp 00:23:29.790 rmmod nvme_fabrics 00:23:29.790 rmmod nvme_keyring 00:23:29.790 17:48:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:29.790 17:48:51 -- nvmf/common.sh@123 -- # set -e 00:23:29.790 17:48:51 -- nvmf/common.sh@124 -- # return 0 00:23:29.790 17:48:51 -- nvmf/common.sh@477 -- # '[' -n 697992 ']' 00:23:29.790 17:48:51 -- nvmf/common.sh@478 -- # killprocess 697992 00:23:29.790 17:48:51 -- common/autotest_common.sh@926 -- # '[' -z 697992 ']' 00:23:29.790 17:48:51 -- common/autotest_common.sh@930 -- # kill -0 697992 00:23:29.790 17:48:51 -- common/autotest_common.sh@931 -- # uname 00:23:29.790 17:48:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:29.790 17:48:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 697992 00:23:29.790 17:48:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:29.790 17:48:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:29.790 17:48:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 697992' 00:23:29.790 killing process with pid 697992 00:23:29.790 17:48:51 -- common/autotest_common.sh@945 -- # kill 697992 00:23:29.790 17:48:51 -- common/autotest_common.sh@950 -- # wait 697992 00:23:30.049 17:48:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:30.049 17:48:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:30.049 17:48:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:30.049 17:48:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.049 17:48:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:30.049 17:48:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.049 17:48:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.049 17:48:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.586 17:48:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:32.587 00:23:32.587 real 0m15.254s 00:23:32.587 user 0m35.256s 00:23:32.587 sys 0m5.544s 00:23:32.587 17:48:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:32.587 17:48:53 -- common/autotest_common.sh@10 -- # set +x 00:23:32.587 ************************************ 00:23:32.587 END TEST nvmf_shutdown_tc1 00:23:32.587 ************************************ 00:23:32.587 17:48:53 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 
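A quick sanity check on the bdevperf summary above: with -o 65536 the MiB/s column is simply IOPS divided by 16 (64 KiB per I/O), so the Total row's 4615.55 IOPS should come out to roughly 288 MiB/s, which is exactly what is reported:

    echo '4615.55 * 65536 / 1048576' | bc -l    # 288.47..., matching the 288.47 MiB/s in the Total row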
00:23:32.587 17:48:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:32.587 17:48:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:32.587 17:48:53 -- common/autotest_common.sh@10 -- # set +x 00:23:32.587 ************************************ 00:23:32.587 START TEST nvmf_shutdown_tc2 00:23:32.587 ************************************ 00:23:32.587 17:48:53 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:23:32.587 17:48:53 -- target/shutdown.sh@98 -- # starttarget 00:23:32.587 17:48:53 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:32.587 17:48:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:32.587 17:48:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.587 17:48:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:32.587 17:48:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:32.587 17:48:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:32.587 17:48:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.587 17:48:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:32.587 17:48:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.587 17:48:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:32.587 17:48:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:32.587 17:48:53 -- common/autotest_common.sh@10 -- # set +x 00:23:32.587 17:48:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:32.587 17:48:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:32.587 17:48:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:32.587 17:48:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:32.587 17:48:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:32.587 17:48:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:32.587 17:48:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:32.587 17:48:53 -- nvmf/common.sh@294 -- # net_devs=() 00:23:32.587 17:48:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:32.587 17:48:53 -- nvmf/common.sh@295 -- # e810=() 00:23:32.587 17:48:53 -- nvmf/common.sh@295 -- # local -ga e810 00:23:32.587 17:48:53 -- nvmf/common.sh@296 -- # x722=() 00:23:32.587 17:48:53 -- nvmf/common.sh@296 -- # local -ga x722 00:23:32.587 17:48:53 -- nvmf/common.sh@297 -- # mlx=() 00:23:32.587 17:48:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:32.587 17:48:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.587 17:48:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:32.587 17:48:53 -- 
nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:32.587 17:48:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:32.587 17:48:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:32.587 17:48:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:32.587 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:32.587 17:48:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:32.587 17:48:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:32.587 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:32.587 17:48:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:32.587 17:48:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:32.587 17:48:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.587 17:48:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:32.587 17:48:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.587 17:48:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:32.587 Found net devices under 0000:86:00.0: cvl_0_0 00:23:32.587 17:48:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.587 17:48:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:32.587 17:48:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.587 17:48:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:32.587 17:48:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.587 17:48:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:32.587 Found net devices under 0000:86:00.1: cvl_0_1 00:23:32.587 17:48:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.587 17:48:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:32.587 17:48:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:32.587 17:48:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:32.587 17:48:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:32.587 17:48:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.587 17:48:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.587 17:48:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.587 17:48:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:32.587 17:48:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.587 17:48:53 -- nvmf/common.sh@236 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.587 17:48:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:32.587 17:48:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.587 17:48:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.587 17:48:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:32.587 17:48:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:32.587 17:48:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.587 17:48:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.587 17:48:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.587 17:48:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.587 17:48:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:32.587 17:48:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.587 17:48:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.587 17:48:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.587 17:48:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:32.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:23:32.587 00:23:32.587 --- 10.0.0.2 ping statistics --- 00:23:32.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.587 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:23:32.587 17:48:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:23:32.587 00:23:32.587 --- 10.0.0.1 ping statistics --- 00:23:32.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.587 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:23:32.587 17:48:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.587 17:48:54 -- nvmf/common.sh@410 -- # return 0 00:23:32.587 17:48:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:32.587 17:48:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.587 17:48:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:32.587 17:48:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:32.587 17:48:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.587 17:48:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:32.587 17:48:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:32.587 17:48:54 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:32.587 17:48:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:32.587 17:48:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:32.587 17:48:54 -- common/autotest_common.sh@10 -- # set +x 00:23:32.587 17:48:54 -- nvmf/common.sh@469 -- # nvmfpid=699831 00:23:32.587 17:48:54 -- nvmf/common.sh@470 -- # waitforlisten 699831 00:23:32.587 17:48:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:32.587 17:48:54 -- common/autotest_common.sh@819 -- # '[' -z 699831 ']' 00:23:32.587 17:48:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.587 17:48:54 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:23:32.587 17:48:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.587 17:48:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:32.587 17:48:54 -- common/autotest_common.sh@10 -- # set +x 00:23:32.587 [2024-07-24 17:48:54.104841] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:32.587 [2024-07-24 17:48:54.104888] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.587 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.587 [2024-07-24 17:48:54.165273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.846 [2024-07-24 17:48:54.243519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:32.846 [2024-07-24 17:48:54.243631] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.846 [2024-07-24 17:48:54.243638] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.846 [2024-07-24 17:48:54.243644] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.846 [2024-07-24 17:48:54.243737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.846 [2024-07-24 17:48:54.243821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.846 [2024-07-24 17:48:54.243854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.846 [2024-07-24 17:48:54.243855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:33.416 17:48:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:33.416 17:48:54 -- common/autotest_common.sh@852 -- # return 0 00:23:33.416 17:48:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:33.416 17:48:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:33.416 17:48:54 -- common/autotest_common.sh@10 -- # set +x 00:23:33.416 17:48:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.416 17:48:54 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.416 17:48:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.416 17:48:54 -- common/autotest_common.sh@10 -- # set +x 00:23:33.416 [2024-07-24 17:48:54.944316] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.416 17:48:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.416 17:48:54 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:33.416 17:48:54 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:33.416 17:48:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:33.416 17:48:54 -- common/autotest_common.sh@10 -- # set +x 00:23:33.416 17:48:54 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:33.416 17:48:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.416 17:48:54 -- target/shutdown.sh@28 -- # cat 00:23:33.416 17:48:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.416 17:48:54 -- target/shutdown.sh@28 -- # cat 
00:23:33.416 17:48:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.416 17:48:54 -- target/shutdown.sh@28 -- # cat 00:23:33.416 17:48:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.416 17:48:54 -- target/shutdown.sh@28 -- # cat 00:23:33.416 17:48:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.416 17:48:54 -- target/shutdown.sh@28 -- # cat 00:23:33.416 17:48:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.416 17:48:54 -- target/shutdown.sh@28 -- # cat 00:23:33.416 17:48:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.416 17:48:54 -- target/shutdown.sh@28 -- # cat 00:23:33.416 17:48:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.416 17:48:54 -- target/shutdown.sh@28 -- # cat 00:23:33.416 17:48:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.416 17:48:54 -- target/shutdown.sh@28 -- # cat 00:23:33.416 17:48:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.416 17:48:54 -- target/shutdown.sh@28 -- # cat 00:23:33.416 17:48:54 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:33.416 17:48:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.416 17:48:54 -- common/autotest_common.sh@10 -- # set +x 00:23:33.709 Malloc1 00:23:33.709 [2024-07-24 17:48:55.035996] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.709 Malloc2 00:23:33.709 Malloc3 00:23:33.709 Malloc4 00:23:33.709 Malloc5 00:23:33.709 Malloc6 00:23:33.709 Malloc7 00:23:33.978 Malloc8 00:23:33.978 Malloc9 00:23:33.978 Malloc10 00:23:33.978 17:48:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.978 17:48:55 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:33.978 17:48:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:33.978 17:48:55 -- common/autotest_common.sh@10 -- # set +x 00:23:33.978 17:48:55 -- target/shutdown.sh@102 -- # perfpid=700111 00:23:33.978 17:48:55 -- target/shutdown.sh@103 -- # waitforlisten 700111 /var/tmp/bdevperf.sock 00:23:33.978 17:48:55 -- common/autotest_common.sh@819 -- # '[' -z 700111 ']' 00:23:33.978 17:48:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.978 17:48:55 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:33.978 17:48:55 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:33.978 17:48:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:33.978 17:48:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
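The per-subsystem heredoc bodies behind the create_subsystems loop above (target/shutdown.sh@27/@28) are swallowed by xtrace; only the "for i" and "cat" records are visible. Judging from the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 TCP listener that appear once target/shutdown.sh@35 runs rpc_cmd, each iteration plausibly appends a fragment like the sketch below to test/nvmf/target/rpcs.txt (a hedged reconstruction, not the verbatim script; the malloc geometry and the -a/-s subsystem flags are assumptions):

    # Hedged reconstruction of one create_subsystems iteration; the real script
    # uses a cat heredoc, written out here as plain echo lines for clarity.
    for i in "${num_subsystems[@]}"; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    # rpc_cmd then replays rpcs.txt against the nvmf_tgt RPC socket inside the
    # cvl_0_0_ns_spdk namespace, which is why the Malloc bdevs and the
    # "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice follow.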
00:23:33.978 17:48:55 -- nvmf/common.sh@520 -- # config=() 00:23:33.978 17:48:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:33.978 17:48:55 -- nvmf/common.sh@520 -- # local subsystem config 00:23:33.978 17:48:55 -- common/autotest_common.sh@10 -- # set +x 00:23:33.978 17:48:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.978 { 00:23:33.978 "params": { 00:23:33.978 "name": "Nvme$subsystem", 00:23:33.978 "trtype": "$TEST_TRANSPORT", 00:23:33.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.978 "adrfam": "ipv4", 00:23:33.978 "trsvcid": "$NVMF_PORT", 00:23:33.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.978 "hdgst": ${hdgst:-false}, 00:23:33.978 "ddgst": ${ddgst:-false} 00:23:33.978 }, 00:23:33.978 "method": "bdev_nvme_attach_controller" 00:23:33.978 } 00:23:33.978 EOF 00:23:33.978 )") 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # cat 00:23:33.978 17:48:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.978 { 00:23:33.978 "params": { 00:23:33.978 "name": "Nvme$subsystem", 00:23:33.978 "trtype": "$TEST_TRANSPORT", 00:23:33.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.978 "adrfam": "ipv4", 00:23:33.978 "trsvcid": "$NVMF_PORT", 00:23:33.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.978 "hdgst": ${hdgst:-false}, 00:23:33.978 "ddgst": ${ddgst:-false} 00:23:33.978 }, 00:23:33.978 "method": "bdev_nvme_attach_controller" 00:23:33.978 } 00:23:33.978 EOF 00:23:33.978 )") 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # cat 00:23:33.978 17:48:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.978 { 00:23:33.978 "params": { 00:23:33.978 "name": "Nvme$subsystem", 00:23:33.978 "trtype": "$TEST_TRANSPORT", 00:23:33.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.978 "adrfam": "ipv4", 00:23:33.978 "trsvcid": "$NVMF_PORT", 00:23:33.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.978 "hdgst": ${hdgst:-false}, 00:23:33.978 "ddgst": ${ddgst:-false} 00:23:33.978 }, 00:23:33.978 "method": "bdev_nvme_attach_controller" 00:23:33.978 } 00:23:33.978 EOF 00:23:33.978 )") 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # cat 00:23:33.978 17:48:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.978 { 00:23:33.978 "params": { 00:23:33.978 "name": "Nvme$subsystem", 00:23:33.978 "trtype": "$TEST_TRANSPORT", 00:23:33.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.978 "adrfam": "ipv4", 00:23:33.978 "trsvcid": "$NVMF_PORT", 00:23:33.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.978 "hdgst": ${hdgst:-false}, 00:23:33.978 "ddgst": ${ddgst:-false} 00:23:33.978 }, 00:23:33.978 "method": "bdev_nvme_attach_controller" 00:23:33.978 } 00:23:33.978 EOF 00:23:33.978 )") 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # cat 00:23:33.978 17:48:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.978 { 00:23:33.978 "params": { 00:23:33.978 "name": "Nvme$subsystem", 00:23:33.978 "trtype": 
"$TEST_TRANSPORT", 00:23:33.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.978 "adrfam": "ipv4", 00:23:33.978 "trsvcid": "$NVMF_PORT", 00:23:33.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.978 "hdgst": ${hdgst:-false}, 00:23:33.978 "ddgst": ${ddgst:-false} 00:23:33.978 }, 00:23:33.978 "method": "bdev_nvme_attach_controller" 00:23:33.978 } 00:23:33.978 EOF 00:23:33.978 )") 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # cat 00:23:33.978 17:48:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.978 { 00:23:33.978 "params": { 00:23:33.978 "name": "Nvme$subsystem", 00:23:33.978 "trtype": "$TEST_TRANSPORT", 00:23:33.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.978 "adrfam": "ipv4", 00:23:33.978 "trsvcid": "$NVMF_PORT", 00:23:33.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.978 "hdgst": ${hdgst:-false}, 00:23:33.978 "ddgst": ${ddgst:-false} 00:23:33.978 }, 00:23:33.978 "method": "bdev_nvme_attach_controller" 00:23:33.978 } 00:23:33.978 EOF 00:23:33.978 )") 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # cat 00:23:33.978 17:48:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.978 { 00:23:33.978 "params": { 00:23:33.978 "name": "Nvme$subsystem", 00:23:33.978 "trtype": "$TEST_TRANSPORT", 00:23:33.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.978 "adrfam": "ipv4", 00:23:33.978 "trsvcid": "$NVMF_PORT", 00:23:33.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.978 "hdgst": ${hdgst:-false}, 00:23:33.978 "ddgst": ${ddgst:-false} 00:23:33.978 }, 00:23:33.978 "method": "bdev_nvme_attach_controller" 00:23:33.978 } 00:23:33.978 EOF 00:23:33.978 )") 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # cat 00:23:33.978 [2024-07-24 17:48:55.507646] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:33.978 [2024-07-24 17:48:55.507698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid700111 ] 00:23:33.978 17:48:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.978 { 00:23:33.978 "params": { 00:23:33.978 "name": "Nvme$subsystem", 00:23:33.978 "trtype": "$TEST_TRANSPORT", 00:23:33.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.978 "adrfam": "ipv4", 00:23:33.978 "trsvcid": "$NVMF_PORT", 00:23:33.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.978 "hdgst": ${hdgst:-false}, 00:23:33.978 "ddgst": ${ddgst:-false} 00:23:33.978 }, 00:23:33.978 "method": "bdev_nvme_attach_controller" 00:23:33.978 } 00:23:33.978 EOF 00:23:33.978 )") 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # cat 00:23:33.978 17:48:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.978 17:48:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.978 { 00:23:33.978 "params": { 00:23:33.978 "name": "Nvme$subsystem", 00:23:33.978 "trtype": "$TEST_TRANSPORT", 00:23:33.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.978 "adrfam": "ipv4", 00:23:33.978 "trsvcid": "$NVMF_PORT", 00:23:33.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.978 "hdgst": ${hdgst:-false}, 00:23:33.978 "ddgst": ${ddgst:-false} 00:23:33.978 }, 00:23:33.978 "method": "bdev_nvme_attach_controller" 00:23:33.978 } 00:23:33.978 EOF 00:23:33.978 )") 00:23:33.979 17:48:55 -- nvmf/common.sh@542 -- # cat 00:23:33.979 17:48:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.979 17:48:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.979 { 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme$subsystem", 00:23:33.979 "trtype": "$TEST_TRANSPORT", 00:23:33.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "$NVMF_PORT", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.979 "hdgst": ${hdgst:-false}, 00:23:33.979 "ddgst": ${ddgst:-false} 00:23:33.979 }, 00:23:33.979 "method": "bdev_nvme_attach_controller" 00:23:33.979 } 00:23:33.979 EOF 00:23:33.979 )") 00:23:33.979 17:48:55 -- nvmf/common.sh@542 -- # cat 00:23:33.979 17:48:55 -- nvmf/common.sh@544 -- # jq . 
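gen_nvmf_target_json (nvmf/common.sh@520-546, traced above and below) builds one bdev_nvme_attach_controller fragment per subsystem, joins the array with IFS=',' and pretty-prints it through jq before bdevperf reads it from /dev/fd/63. Only the joined "config" entries are echoed by xtrace; the outer wrapper is not, so the sketch below assumes the standard SPDK JSON-config shape ("subsystems" -> "bdev" -> "config"):

    # Minimal sketch, assuming the standard SPDK JSON-config wrapper; ${config[@]}
    # holds the per-subsystem fragments accumulated by nvmf/common.sh@542 above.
    gen_bdevperf_json() {
        local IFS=,
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
    }
    # bdevperf consumes the result via process substitution, matching the command
    # line traced at target/shutdown.sh@101:
    #   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_bdevperf_json) \
    #            -q 64 -o 65536 -w verify -t 10
    # i.e. attach one NVMe-oF TCP controller per subnqn (Nvme1..Nvme10) and run a
    # queue-depth-64, 64 KiB verify workload against each for 10 seconds.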
00:23:33.979 17:48:55 -- nvmf/common.sh@545 -- # IFS=, 00:23:33.979 17:48:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme1", 00:23:33.979 "trtype": "tcp", 00:23:33.979 "traddr": "10.0.0.2", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "4420", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.979 "hdgst": false, 00:23:33.979 "ddgst": false 00:23:33.979 }, 00:23:33.979 "method": "bdev_nvme_attach_controller" 00:23:33.979 },{ 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme2", 00:23:33.979 "trtype": "tcp", 00:23:33.979 "traddr": "10.0.0.2", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "4420", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:33.979 "hdgst": false, 00:23:33.979 "ddgst": false 00:23:33.979 }, 00:23:33.979 "method": "bdev_nvme_attach_controller" 00:23:33.979 },{ 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme3", 00:23:33.979 "trtype": "tcp", 00:23:33.979 "traddr": "10.0.0.2", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "4420", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:33.979 "hdgst": false, 00:23:33.979 "ddgst": false 00:23:33.979 }, 00:23:33.979 "method": "bdev_nvme_attach_controller" 00:23:33.979 },{ 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme4", 00:23:33.979 "trtype": "tcp", 00:23:33.979 "traddr": "10.0.0.2", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "4420", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:33.979 "hdgst": false, 00:23:33.979 "ddgst": false 00:23:33.979 }, 00:23:33.979 "method": "bdev_nvme_attach_controller" 00:23:33.979 },{ 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme5", 00:23:33.979 "trtype": "tcp", 00:23:33.979 "traddr": "10.0.0.2", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "4420", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:33.979 "hdgst": false, 00:23:33.979 "ddgst": false 00:23:33.979 }, 00:23:33.979 "method": "bdev_nvme_attach_controller" 00:23:33.979 },{ 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme6", 00:23:33.979 "trtype": "tcp", 00:23:33.979 "traddr": "10.0.0.2", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "4420", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:33.979 "hdgst": false, 00:23:33.979 "ddgst": false 00:23:33.979 }, 00:23:33.979 "method": "bdev_nvme_attach_controller" 00:23:33.979 },{ 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme7", 00:23:33.979 "trtype": "tcp", 00:23:33.979 "traddr": "10.0.0.2", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "4420", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:33.979 "hdgst": false, 00:23:33.979 "ddgst": false 00:23:33.979 }, 00:23:33.979 "method": "bdev_nvme_attach_controller" 00:23:33.979 },{ 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme8", 00:23:33.979 "trtype": "tcp", 00:23:33.979 "traddr": "10.0.0.2", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "4420", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:33.979 "hdgst": false, 00:23:33.979 "ddgst": false 00:23:33.979 }, 00:23:33.979 "method": 
"bdev_nvme_attach_controller" 00:23:33.979 },{ 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme9", 00:23:33.979 "trtype": "tcp", 00:23:33.979 "traddr": "10.0.0.2", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "4420", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:33.979 "hdgst": false, 00:23:33.979 "ddgst": false 00:23:33.979 }, 00:23:33.979 "method": "bdev_nvme_attach_controller" 00:23:33.979 },{ 00:23:33.979 "params": { 00:23:33.979 "name": "Nvme10", 00:23:33.979 "trtype": "tcp", 00:23:33.979 "traddr": "10.0.0.2", 00:23:33.979 "adrfam": "ipv4", 00:23:33.979 "trsvcid": "4420", 00:23:33.979 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:33.979 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:33.979 "hdgst": false, 00:23:33.979 "ddgst": false 00:23:33.979 }, 00:23:33.979 "method": "bdev_nvme_attach_controller" 00:23:33.979 }' 00:23:33.979 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.979 [2024-07-24 17:48:55.564279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.237 [2024-07-24 17:48:55.635891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.139 Running I/O for 10 seconds... 00:23:36.139 17:48:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:36.139 17:48:57 -- common/autotest_common.sh@852 -- # return 0 00:23:36.139 17:48:57 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:36.139 17:48:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:36.139 17:48:57 -- common/autotest_common.sh@10 -- # set +x 00:23:36.139 17:48:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:36.139 17:48:57 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:36.139 17:48:57 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:36.139 17:48:57 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:36.139 17:48:57 -- target/shutdown.sh@57 -- # local ret=1 00:23:36.139 17:48:57 -- target/shutdown.sh@58 -- # local i 00:23:36.139 17:48:57 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:36.139 17:48:57 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:36.139 17:48:57 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:36.139 17:48:57 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:36.139 17:48:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:36.139 17:48:57 -- common/autotest_common.sh@10 -- # set +x 00:23:36.139 17:48:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:36.398 17:48:57 -- target/shutdown.sh@60 -- # read_io_count=167 00:23:36.398 17:48:57 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:23:36.398 17:48:57 -- target/shutdown.sh@64 -- # ret=0 00:23:36.398 17:48:57 -- target/shutdown.sh@65 -- # break 00:23:36.398 17:48:57 -- target/shutdown.sh@69 -- # return 0 00:23:36.398 17:48:57 -- target/shutdown.sh@109 -- # killprocess 700111 00:23:36.399 17:48:57 -- common/autotest_common.sh@926 -- # '[' -z 700111 ']' 00:23:36.399 17:48:57 -- common/autotest_common.sh@930 -- # kill -0 700111 00:23:36.399 17:48:57 -- common/autotest_common.sh@931 -- # uname 00:23:36.399 17:48:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:36.399 17:48:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 700111 00:23:36.399 17:48:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:36.399 17:48:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo 
']' 00:23:36.399 17:48:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 700111' 00:23:36.399 killing process with pid 700111 00:23:36.399 17:48:57 -- common/autotest_common.sh@945 -- # kill 700111 00:23:36.399 17:48:57 -- common/autotest_common.sh@950 -- # wait 700111 00:23:36.399 Received shutdown signal, test time was about 0.544750 seconds 00:23:36.399 00:23:36.399 Latency(us) 00:23:36.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.399 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.399 Verification LBA range: start 0x0 length 0x400 00:23:36.399 Nvme1n1 : 0.54 500.82 31.30 0.00 0.00 115198.47 14930.81 112152.04 00:23:36.399 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.399 Verification LBA range: start 0x0 length 0x400 00:23:36.399 Nvme2n1 : 0.54 499.92 31.25 0.00 0.00 114557.47 11682.50 110784.33 00:23:36.399 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.399 Verification LBA range: start 0x0 length 0x400 00:23:36.399 Nvme3n1 : 0.49 564.26 35.27 0.00 0.00 107255.22 5670.29 110328.43 00:23:36.399 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.399 Verification LBA range: start 0x0 length 0x400 00:23:36.399 Nvme4n1 : 0.50 380.33 23.77 0.00 0.00 156451.38 18122.13 143153.42 00:23:36.399 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.399 Verification LBA range: start 0x0 length 0x400 00:23:36.399 Nvme5n1 : 0.51 450.81 28.18 0.00 0.00 131009.26 9972.87 116255.17 00:23:36.399 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.399 Verification LBA range: start 0x0 length 0x400 00:23:36.399 Nvme6n1 : 0.48 478.46 29.90 0.00 0.00 121379.57 9402.99 97107.26 00:23:36.399 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.399 Verification LBA range: start 0x0 length 0x400 00:23:36.399 Nvme7n1 : 0.48 473.58 29.60 0.00 0.00 120689.14 9915.88 96651.35 00:23:36.399 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.399 Verification LBA range: start 0x0 length 0x400 00:23:36.399 Nvme8n1 : 0.51 447.71 27.98 0.00 0.00 126505.48 11454.55 108504.82 00:23:36.399 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.399 Verification LBA range: start 0x0 length 0x400 00:23:36.399 Nvme9n1 : 0.53 355.70 22.23 0.00 0.00 142247.01 31457.28 113519.75 00:23:36.399 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.399 Verification LBA range: start 0x0 length 0x400 00:23:36.399 Nvme10n1 : 0.50 377.19 23.57 0.00 0.00 145203.48 12936.24 124005.51 00:23:36.399 =================================================================================================================== 00:23:36.399 Total : 4528.77 283.05 0.00 0.00 126103.99 5670.29 143153.42 00:23:36.658 17:48:58 -- target/shutdown.sh@112 -- # sleep 1 00:23:37.594 17:48:59 -- target/shutdown.sh@113 -- # kill -0 699831 00:23:37.594 17:48:59 -- target/shutdown.sh@115 -- # stoptarget 00:23:37.594 17:48:59 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:37.594 17:48:59 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:37.594 17:48:59 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:37.594 17:48:59 -- 
target/shutdown.sh@45 -- # nvmftestfini 00:23:37.594 17:48:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:37.594 17:48:59 -- nvmf/common.sh@116 -- # sync 00:23:37.594 17:48:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:37.594 17:48:59 -- nvmf/common.sh@119 -- # set +e 00:23:37.594 17:48:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:37.594 17:48:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:37.594 rmmod nvme_tcp 00:23:37.594 rmmod nvme_fabrics 00:23:37.853 rmmod nvme_keyring 00:23:37.853 17:48:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:37.853 17:48:59 -- nvmf/common.sh@123 -- # set -e 00:23:37.853 17:48:59 -- nvmf/common.sh@124 -- # return 0 00:23:37.853 17:48:59 -- nvmf/common.sh@477 -- # '[' -n 699831 ']' 00:23:37.853 17:48:59 -- nvmf/common.sh@478 -- # killprocess 699831 00:23:37.853 17:48:59 -- common/autotest_common.sh@926 -- # '[' -z 699831 ']' 00:23:37.853 17:48:59 -- common/autotest_common.sh@930 -- # kill -0 699831 00:23:37.853 17:48:59 -- common/autotest_common.sh@931 -- # uname 00:23:37.853 17:48:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:37.853 17:48:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 699831 00:23:37.853 17:48:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:37.853 17:48:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:37.853 17:48:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 699831' 00:23:37.853 killing process with pid 699831 00:23:37.853 17:48:59 -- common/autotest_common.sh@945 -- # kill 699831 00:23:37.853 17:48:59 -- common/autotest_common.sh@950 -- # wait 699831 00:23:38.113 17:48:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:38.113 17:48:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:38.113 17:48:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:38.113 17:48:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.113 17:48:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:38.113 17:48:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.113 17:48:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.113 17:48:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.653 17:49:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:40.653 00:23:40.653 real 0m8.013s 00:23:40.653 user 0m24.453s 00:23:40.653 sys 0m1.281s 00:23:40.653 17:49:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:40.653 17:49:01 -- common/autotest_common.sh@10 -- # set +x 00:23:40.653 ************************************ 00:23:40.654 END TEST nvmf_shutdown_tc2 00:23:40.654 ************************************ 00:23:40.654 17:49:01 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:40.654 17:49:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:40.654 17:49:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:40.654 17:49:01 -- common/autotest_common.sh@10 -- # set +x 00:23:40.654 ************************************ 00:23:40.654 START TEST nvmf_shutdown_tc3 00:23:40.654 ************************************ 00:23:40.654 17:49:01 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:23:40.654 17:49:01 -- target/shutdown.sh@120 -- # starttarget 00:23:40.654 17:49:01 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:40.654 17:49:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:40.654 17:49:01 -- nvmf/common.sh@434 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:23:40.654 17:49:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:40.654 17:49:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:40.654 17:49:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:40.654 17:49:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.654 17:49:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.654 17:49:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.654 17:49:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:40.654 17:49:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:40.654 17:49:01 -- common/autotest_common.sh@10 -- # set +x 00:23:40.654 17:49:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:40.654 17:49:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:40.654 17:49:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:40.654 17:49:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:40.654 17:49:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:40.654 17:49:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:40.654 17:49:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:40.654 17:49:01 -- nvmf/common.sh@294 -- # net_devs=() 00:23:40.654 17:49:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:40.654 17:49:01 -- nvmf/common.sh@295 -- # e810=() 00:23:40.654 17:49:01 -- nvmf/common.sh@295 -- # local -ga e810 00:23:40.654 17:49:01 -- nvmf/common.sh@296 -- # x722=() 00:23:40.654 17:49:01 -- nvmf/common.sh@296 -- # local -ga x722 00:23:40.654 17:49:01 -- nvmf/common.sh@297 -- # mlx=() 00:23:40.654 17:49:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:40.654 17:49:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.654 17:49:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:40.654 17:49:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:40.654 17:49:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:40.654 17:49:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:40.654 17:49:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:40.654 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:40.654 17:49:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:40.654 17:49:01 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:40.654 17:49:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:40.654 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:40.654 17:49:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:40.654 17:49:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:40.654 17:49:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.654 17:49:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:40.654 17:49:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.654 17:49:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:40.654 Found net devices under 0000:86:00.0: cvl_0_0 00:23:40.654 17:49:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.654 17:49:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:40.654 17:49:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.654 17:49:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:40.654 17:49:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.654 17:49:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:40.654 Found net devices under 0000:86:00.1: cvl_0_1 00:23:40.654 17:49:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.654 17:49:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:40.654 17:49:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:40.654 17:49:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:40.654 17:49:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:40.654 17:49:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.654 17:49:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.654 17:49:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.654 17:49:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:40.654 17:49:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.654 17:49:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.654 17:49:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:40.654 17:49:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.654 17:49:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.654 17:49:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:40.654 17:49:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:40.654 17:49:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.654 17:49:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.654 17:49:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:23:40.654 17:49:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.654 17:49:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:40.654 17:49:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.654 17:49:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.654 17:49:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.654 17:49:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:40.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:23:40.655 00:23:40.655 --- 10.0.0.2 ping statistics --- 00:23:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.655 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:23:40.655 17:49:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:23:40.655 00:23:40.655 --- 10.0.0.1 ping statistics --- 00:23:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.655 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:23:40.655 17:49:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.655 17:49:02 -- nvmf/common.sh@410 -- # return 0 00:23:40.655 17:49:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:40.655 17:49:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.655 17:49:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:40.655 17:49:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:40.655 17:49:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.655 17:49:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:40.655 17:49:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:40.655 17:49:02 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:40.655 17:49:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:40.655 17:49:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:40.655 17:49:02 -- common/autotest_common.sh@10 -- # set +x 00:23:40.655 17:49:02 -- nvmf/common.sh@469 -- # nvmfpid=701386 00:23:40.655 17:49:02 -- nvmf/common.sh@470 -- # waitforlisten 701386 00:23:40.655 17:49:02 -- common/autotest_common.sh@819 -- # '[' -z 701386 ']' 00:23:40.655 17:49:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.655 17:49:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:40.655 17:49:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.655 17:49:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:40.655 17:49:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:40.655 17:49:02 -- common/autotest_common.sh@10 -- # set +x 00:23:40.655 [2024-07-24 17:49:02.118382] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
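For reference, the nvmf_tcp_init sequence traced above condenses to the commands below. The interface names cvl_0_0/cvl_0_1 are whatever this run discovered under the two e810 ports (0000:86:00.0/0000:86:00.1); they are run-specific, not constants:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1     # start from clean addresses
    ip netns add cvl_0_0_ns_spdk                           # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # first port becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                     # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator reachability

nvmf_tgt is then started through the NVMF_TARGET_NS_CMD prefix ("ip netns exec cvl_0_0_ns_spdk"), so it listens on 10.0.0.2 inside the namespace while bdevperf connects from the host side.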
00:23:40.655 [2024-07-24 17:49:02.118424] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.655 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.655 [2024-07-24 17:49:02.171125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:40.655 [2024-07-24 17:49:02.249557] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:40.655 [2024-07-24 17:49:02.249662] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.655 [2024-07-24 17:49:02.249670] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.655 [2024-07-24 17:49:02.249676] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.655 [2024-07-24 17:49:02.249709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.655 [2024-07-24 17:49:02.249795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.655 [2024-07-24 17:49:02.249831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.655 [2024-07-24 17:49:02.249832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:41.592 17:49:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:41.592 17:49:02 -- common/autotest_common.sh@852 -- # return 0 00:23:41.592 17:49:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:41.592 17:49:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:41.592 17:49:02 -- common/autotest_common.sh@10 -- # set +x 00:23:41.592 17:49:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.592 17:49:02 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.592 17:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:41.592 17:49:02 -- common/autotest_common.sh@10 -- # set +x 00:23:41.593 [2024-07-24 17:49:02.973381] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.593 17:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:41.593 17:49:02 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:41.593 17:49:02 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:41.593 17:49:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:41.593 17:49:02 -- common/autotest_common.sh@10 -- # set +x 00:23:41.593 17:49:02 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:41.593 17:49:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.593 17:49:02 -- target/shutdown.sh@28 -- # cat 00:23:41.593 17:49:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.593 17:49:02 -- target/shutdown.sh@28 -- # cat 00:23:41.593 17:49:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.593 17:49:02 -- target/shutdown.sh@28 -- # cat 00:23:41.593 17:49:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.593 17:49:03 -- target/shutdown.sh@28 -- # cat 00:23:41.593 17:49:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.593 17:49:03 -- target/shutdown.sh@28 -- # cat 00:23:41.593 17:49:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.593 17:49:03 -- 
target/shutdown.sh@28 -- # cat 00:23:41.593 17:49:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.593 17:49:03 -- target/shutdown.sh@28 -- # cat 00:23:41.593 17:49:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.593 17:49:03 -- target/shutdown.sh@28 -- # cat 00:23:41.593 17:49:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.593 17:49:03 -- target/shutdown.sh@28 -- # cat 00:23:41.593 17:49:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.593 17:49:03 -- target/shutdown.sh@28 -- # cat 00:23:41.593 17:49:03 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:41.593 17:49:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:41.593 17:49:03 -- common/autotest_common.sh@10 -- # set +x 00:23:41.593 Malloc1 00:23:41.593 [2024-07-24 17:49:03.069421] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.593 Malloc2 00:23:41.593 Malloc3 00:23:41.593 Malloc4 00:23:41.858 Malloc5 00:23:41.858 Malloc6 00:23:41.858 Malloc7 00:23:41.858 Malloc8 00:23:41.858 Malloc9 00:23:41.858 Malloc10 00:23:42.120 17:49:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:42.120 17:49:03 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:42.120 17:49:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:42.120 17:49:03 -- common/autotest_common.sh@10 -- # set +x 00:23:42.120 17:49:03 -- target/shutdown.sh@124 -- # perfpid=701666 00:23:42.120 17:49:03 -- target/shutdown.sh@125 -- # waitforlisten 701666 /var/tmp/bdevperf.sock 00:23:42.120 17:49:03 -- common/autotest_common.sh@819 -- # '[' -z 701666 ']' 00:23:42.120 17:49:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.120 17:49:03 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:42.120 17:49:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:42.120 17:49:03 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:42.120 17:49:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
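The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock..." message above comes from waitforlisten in autotest_common.sh; its body runs under xtrace_disable, so only max_retries=100 (@824) and the final "(( i == 0 ))" check (@848) surface in this trace. A hedged sketch of the polling it performs (the readiness probe and the sleep interval are assumptions):

    # Hedged sketch of waitforlisten; the real helper is hidden by xtrace_disable,
    # so the socket probe below is an assumption about how readiness is detected.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i != 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1    # process died while we waited
            [[ -S $rpc_addr ]] && break                # RPC socket exists -> ready
            sleep 0.1
        done
        if ((i == 0)); then                            # retries exhausted (cf. @848)
            return 1
        fi
        return 0                                       # cf. @852
    }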
00:23:42.120 17:49:03 -- nvmf/common.sh@520 -- # config=() 00:23:42.120 17:49:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:42.120 17:49:03 -- nvmf/common.sh@520 -- # local subsystem config 00:23:42.120 17:49:03 -- common/autotest_common.sh@10 -- # set +x 00:23:42.120 17:49:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:42.120 17:49:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:42.120 { 00:23:42.120 "params": { 00:23:42.120 "name": "Nvme$subsystem", 00:23:42.120 "trtype": "$TEST_TRANSPORT", 00:23:42.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.120 "adrfam": "ipv4", 00:23:42.120 "trsvcid": "$NVMF_PORT", 00:23:42.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.120 "hdgst": ${hdgst:-false}, 00:23:42.120 "ddgst": ${ddgst:-false} 00:23:42.120 }, 00:23:42.120 "method": "bdev_nvme_attach_controller" 00:23:42.120 } 00:23:42.120 EOF 00:23:42.120 )") 00:23:42.120 17:49:03 -- nvmf/common.sh@542 -- # cat 00:23:42.120 17:49:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:42.120 17:49:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:42.120 { 00:23:42.120 "params": { 00:23:42.120 "name": "Nvme$subsystem", 00:23:42.120 "trtype": "$TEST_TRANSPORT", 00:23:42.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.120 "adrfam": "ipv4", 00:23:42.120 "trsvcid": "$NVMF_PORT", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.121 "hdgst": ${hdgst:-false}, 00:23:42.121 "ddgst": ${ddgst:-false} 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 } 00:23:42.121 EOF 00:23:42.121 )") 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # cat 00:23:42.121 17:49:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:42.121 { 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme$subsystem", 00:23:42.121 "trtype": "$TEST_TRANSPORT", 00:23:42.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "$NVMF_PORT", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.121 "hdgst": ${hdgst:-false}, 00:23:42.121 "ddgst": ${ddgst:-false} 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 } 00:23:42.121 EOF 00:23:42.121 )") 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # cat 00:23:42.121 17:49:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:42.121 { 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme$subsystem", 00:23:42.121 "trtype": "$TEST_TRANSPORT", 00:23:42.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "$NVMF_PORT", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.121 "hdgst": ${hdgst:-false}, 00:23:42.121 "ddgst": ${ddgst:-false} 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 } 00:23:42.121 EOF 00:23:42.121 )") 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # cat 00:23:42.121 17:49:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:42.121 { 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme$subsystem", 00:23:42.121 "trtype": 
"$TEST_TRANSPORT", 00:23:42.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "$NVMF_PORT", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.121 "hdgst": ${hdgst:-false}, 00:23:42.121 "ddgst": ${ddgst:-false} 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 } 00:23:42.121 EOF 00:23:42.121 )") 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # cat 00:23:42.121 17:49:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:42.121 { 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme$subsystem", 00:23:42.121 "trtype": "$TEST_TRANSPORT", 00:23:42.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "$NVMF_PORT", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.121 "hdgst": ${hdgst:-false}, 00:23:42.121 "ddgst": ${ddgst:-false} 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 } 00:23:42.121 EOF 00:23:42.121 )") 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # cat 00:23:42.121 [2024-07-24 17:49:03.537610] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:42.121 [2024-07-24 17:49:03.537658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701666 ] 00:23:42.121 17:49:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:42.121 { 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme$subsystem", 00:23:42.121 "trtype": "$TEST_TRANSPORT", 00:23:42.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "$NVMF_PORT", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.121 "hdgst": ${hdgst:-false}, 00:23:42.121 "ddgst": ${ddgst:-false} 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 } 00:23:42.121 EOF 00:23:42.121 )") 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # cat 00:23:42.121 17:49:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:42.121 { 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme$subsystem", 00:23:42.121 "trtype": "$TEST_TRANSPORT", 00:23:42.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "$NVMF_PORT", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.121 "hdgst": ${hdgst:-false}, 00:23:42.121 "ddgst": ${ddgst:-false} 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 } 00:23:42.121 EOF 00:23:42.121 )") 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # cat 00:23:42.121 17:49:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:42.121 { 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme$subsystem", 00:23:42.121 "trtype": "$TEST_TRANSPORT", 00:23:42.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": 
"$NVMF_PORT", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.121 "hdgst": ${hdgst:-false}, 00:23:42.121 "ddgst": ${ddgst:-false} 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 } 00:23:42.121 EOF 00:23:42.121 )") 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # cat 00:23:42.121 17:49:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:42.121 { 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme$subsystem", 00:23:42.121 "trtype": "$TEST_TRANSPORT", 00:23:42.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "$NVMF_PORT", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.121 "hdgst": ${hdgst:-false}, 00:23:42.121 "ddgst": ${ddgst:-false} 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 } 00:23:42.121 EOF 00:23:42.121 )") 00:23:42.121 17:49:03 -- nvmf/common.sh@542 -- # cat 00:23:42.121 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.121 17:49:03 -- nvmf/common.sh@544 -- # jq . 00:23:42.121 17:49:03 -- nvmf/common.sh@545 -- # IFS=, 00:23:42.121 17:49:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme1", 00:23:42.121 "trtype": "tcp", 00:23:42.121 "traddr": "10.0.0.2", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "4420", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.121 "hdgst": false, 00:23:42.121 "ddgst": false 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 },{ 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme2", 00:23:42.121 "trtype": "tcp", 00:23:42.121 "traddr": "10.0.0.2", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "4420", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:42.121 "hdgst": false, 00:23:42.121 "ddgst": false 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 },{ 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme3", 00:23:42.121 "trtype": "tcp", 00:23:42.121 "traddr": "10.0.0.2", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "4420", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:42.121 "hdgst": false, 00:23:42.121 "ddgst": false 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 },{ 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme4", 00:23:42.121 "trtype": "tcp", 00:23:42.121 "traddr": "10.0.0.2", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "4420", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:42.121 "hdgst": false, 00:23:42.121 "ddgst": false 00:23:42.121 }, 00:23:42.121 "method": "bdev_nvme_attach_controller" 00:23:42.121 },{ 00:23:42.121 "params": { 00:23:42.121 "name": "Nvme5", 00:23:42.121 "trtype": "tcp", 00:23:42.121 "traddr": "10.0.0.2", 00:23:42.121 "adrfam": "ipv4", 00:23:42.121 "trsvcid": "4420", 00:23:42.121 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:42.121 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:42.122 "hdgst": false, 00:23:42.122 "ddgst": false 00:23:42.122 }, 00:23:42.122 "method": "bdev_nvme_attach_controller" 00:23:42.122 },{ 00:23:42.122 
"params": { 00:23:42.122 "name": "Nvme6", 00:23:42.122 "trtype": "tcp", 00:23:42.122 "traddr": "10.0.0.2", 00:23:42.122 "adrfam": "ipv4", 00:23:42.122 "trsvcid": "4420", 00:23:42.122 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:42.122 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:42.122 "hdgst": false, 00:23:42.122 "ddgst": false 00:23:42.122 }, 00:23:42.122 "method": "bdev_nvme_attach_controller" 00:23:42.122 },{ 00:23:42.122 "params": { 00:23:42.122 "name": "Nvme7", 00:23:42.122 "trtype": "tcp", 00:23:42.122 "traddr": "10.0.0.2", 00:23:42.122 "adrfam": "ipv4", 00:23:42.122 "trsvcid": "4420", 00:23:42.122 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:42.122 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:42.122 "hdgst": false, 00:23:42.122 "ddgst": false 00:23:42.122 }, 00:23:42.122 "method": "bdev_nvme_attach_controller" 00:23:42.122 },{ 00:23:42.122 "params": { 00:23:42.122 "name": "Nvme8", 00:23:42.122 "trtype": "tcp", 00:23:42.122 "traddr": "10.0.0.2", 00:23:42.122 "adrfam": "ipv4", 00:23:42.122 "trsvcid": "4420", 00:23:42.122 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:42.122 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:42.122 "hdgst": false, 00:23:42.122 "ddgst": false 00:23:42.122 }, 00:23:42.122 "method": "bdev_nvme_attach_controller" 00:23:42.122 },{ 00:23:42.122 "params": { 00:23:42.122 "name": "Nvme9", 00:23:42.122 "trtype": "tcp", 00:23:42.122 "traddr": "10.0.0.2", 00:23:42.122 "adrfam": "ipv4", 00:23:42.122 "trsvcid": "4420", 00:23:42.122 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:42.122 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:42.122 "hdgst": false, 00:23:42.122 "ddgst": false 00:23:42.122 }, 00:23:42.122 "method": "bdev_nvme_attach_controller" 00:23:42.122 },{ 00:23:42.122 "params": { 00:23:42.122 "name": "Nvme10", 00:23:42.122 "trtype": "tcp", 00:23:42.122 "traddr": "10.0.0.2", 00:23:42.122 "adrfam": "ipv4", 00:23:42.122 "trsvcid": "4420", 00:23:42.122 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:42.122 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:42.122 "hdgst": false, 00:23:42.122 "ddgst": false 00:23:42.122 }, 00:23:42.122 "method": "bdev_nvme_attach_controller" 00:23:42.122 }' 00:23:42.122 [2024-07-24 17:49:03.593299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.122 [2024-07-24 17:49:03.664400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.498 Running I/O for 10 seconds... 
00:23:44.449 17:49:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:44.449 17:49:05 -- common/autotest_common.sh@852 -- # return 0 00:23:44.449 17:49:05 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:44.449 17:49:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.449 17:49:05 -- common/autotest_common.sh@10 -- # set +x 00:23:44.449 17:49:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.449 17:49:05 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.449 17:49:05 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:44.449 17:49:05 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:44.449 17:49:05 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:44.449 17:49:05 -- target/shutdown.sh@57 -- # local ret=1 00:23:44.449 17:49:05 -- target/shutdown.sh@58 -- # local i 00:23:44.449 17:49:05 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:44.449 17:49:05 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:44.449 17:49:05 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:44.449 17:49:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.449 17:49:05 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:44.449 17:49:05 -- common/autotest_common.sh@10 -- # set +x 00:23:44.449 17:49:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.449 17:49:05 -- target/shutdown.sh@60 -- # read_io_count=254 00:23:44.449 17:49:05 -- target/shutdown.sh@63 -- # '[' 254 -ge 100 ']' 00:23:44.449 17:49:05 -- target/shutdown.sh@64 -- # ret=0 00:23:44.449 17:49:05 -- target/shutdown.sh@65 -- # break 00:23:44.449 17:49:05 -- target/shutdown.sh@69 -- # return 0 00:23:44.449 17:49:05 -- target/shutdown.sh@134 -- # killprocess 701386 00:23:44.449 17:49:05 -- common/autotest_common.sh@926 -- # '[' -z 701386 ']' 00:23:44.449 17:49:05 -- common/autotest_common.sh@930 -- # kill -0 701386 00:23:44.449 17:49:05 -- common/autotest_common.sh@931 -- # uname 00:23:44.449 17:49:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:44.449 17:49:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 701386 00:23:44.449 17:49:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:44.449 17:49:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:44.449 17:49:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 701386' 00:23:44.449 killing process with pid 701386 00:23:44.449 17:49:05 -- common/autotest_common.sh@945 -- # kill 701386 00:23:44.449 17:49:05 -- common/autotest_common.sh@950 -- # wait 701386 00:23:44.449 [2024-07-24 17:49:05.789889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b7430 is same with the state(5) to be set 00:23:44.449 [2024-07-24 17:49:05.789934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b7430 is same with the state(5) to be set 00:23:44.449 [2024-07-24 17:49:05.789942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b7430 is same with the state(5) to be set 00:23:44.449 [2024-07-24 17:49:05.789949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b7430 is same with the state(5) to be set 00:23:44.449 [2024-07-24 17:49:05.789955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x21b7430 is same with the state(5) to be set
00:23:44.449 tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b7430 is same with the state(5) to be set (message repeated from 17:49:05.789962 through 17:49:05.790319)
00:23:44.450 tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9de0 is same with the state(5) to be set (message repeated from 17:49:05.791376 through 17:49:05.791793)
00:23:44.451 tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b78e0 is same with the state(5) to be set (message repeated from 17:49:05.792827 through 17:49:05.792837)
00:23:44.451 tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b7d90 is same with the state(5) to be set (message repeated from 17:49:05.793833 through 17:49:05.794177)
00:23:44.452 tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b8220 is same with the state(5) to be set (message repeated from 17:49:05.795241 through 17:49:05.795457)
00:23:44.452 tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b86d0 is same with the state(5) to be set (message repeated from 17:49:05.796232 through 17:49:05.796526)
00:23:44.453 tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b8b80 is same with the state(5) to be set (message repeated from 17:49:05.797479 through 17:49:05.797860)
00:23:44.454 tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set (message repeated from 17:49:05.798904 through 17:49:05.799208)
00:23:44.455 [2024-07-24 17:49:05.799214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the
state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9030 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 
[2024-07-24 17:49:05.799623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x791240 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826120 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x783e40 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a20 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e250 is same with the state(5) to be set 00:23:44.455 [2024-07-24 17:49:05.799984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.455 [2024-07-24 17:49:05.799992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.455 [2024-07-24 17:49:05.799999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 
17:49:05.800006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 17:49:05.800019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 17:49:05.800034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762710 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 17:49:05.800079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 17:49:05.800093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 17:49:05.800106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 17:49:05.800121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c6660 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 
00:23:44.456 [2024-07-24 17:49:05.800158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 17:49:05.800164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 17:49:05.800189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 17:49:05.800202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.456 [2024-07-24 17:49:05.800220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.456 [2024-07-24 17:49:05.800227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d9e0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24
17:49:05.800247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same 
with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.456 [2024-07-24 17:49:05.800461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800533] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.800546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b94c0 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the 
state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b9950 is same with the state(5) to be set 00:23:44.457 [2024-07-24 17:49:05.801873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.801896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.801910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.801921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.801930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.801937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.801945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.801951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.801959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.801966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.801974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.801980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.801989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.801996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.802004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.802011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.802019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.802026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.802034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.802041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.802055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.802062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.802070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.802076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.802084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.802093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.802101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.802108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.457 [2024-07-24 17:49:05.802118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.457 [2024-07-24 17:49:05.802124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.458 [2024-07-24 17:49:05.802656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.458 [2024-07-24 17:49:05.802664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.802846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.802930] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7f4cc0 was disconnected and freed. reset controller. 
00:23:44.459 [2024-07-24 17:49:05.803026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 
17:49:05.803187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.459 [2024-07-24 17:49:05.803293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.459 [2024-07-24 17:49:05.803301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803345] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.803611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.803618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.460 [2024-07-24 17:49:05.814966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.460 [2024-07-24 17:49:05.814977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.814986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.814997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.815005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.815026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.815050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.815071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.815091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.815111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.815131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.815151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.815173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.815194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815294] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ad650 was disconnected and freed. reset controller. 00:23:44.461 [2024-07-24 17:49:05.815774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x791240 (9): Bad file descriptor 00:23:44.461 [2024-07-24 17:49:05.815814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x826120 (9): Bad file descriptor 00:23:44.461 [2024-07-24 17:49:05.815828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x783e40 (9): Bad file descriptor 00:23:44.461 [2024-07-24 17:49:05.815843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x826a20 (9): Bad file descriptor 00:23:44.461 [2024-07-24 17:49:05.815864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77e250 (9): Bad file descriptor 00:23:44.461 [2024-07-24 17:49:05.815883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762710 (9): Bad file descriptor 00:23:44.461 [2024-07-24 17:49:05.815900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6660 (9): Bad file descriptor 00:23:44.461 [2024-07-24 17:49:05.815937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.461 [2024-07-24 17:49:05.815949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.461 [2024-07-24 17:49:05.815968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.461 [2024-07-24 17:49:05.815987] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.815997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.461 [2024-07-24 17:49:05.816006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82d950 is same with the state(5) to be set 00:23:44.461 [2024-07-24 17:49:05.816059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.461 [2024-07-24 17:49:05.816070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.461 [2024-07-24 17:49:05.816089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.461 [2024-07-24 17:49:05.816111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.461 [2024-07-24 17:49:05.816129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765470 is same with the state(5) to be set 00:23:44.461 [2024-07-24 17:49:05.816158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d9e0 (9): Bad file descriptor 00:23:44.461 [2024-07-24 17:49:05.816270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.461 [2024-07-24 17:49:05.816512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.461 [2024-07-24 17:49:05.816523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43264 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 
[2024-07-24 17:49:05.816952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.816984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.816995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 
17:49:05.817165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.462 [2024-07-24 17:49:05.817244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.462 [2024-07-24 17:49:05.817257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817370] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.817576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.817650] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x752ae0 was disconnected and freed. reset controller. 00:23:44.463 [2024-07-24 17:49:05.820460] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:44.463 [2024-07-24 17:49:05.822856] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:44.463 [2024-07-24 17:49:05.822891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:44.463 [2024-07-24 17:49:05.823548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.463 [2024-07-24 17:49:05.824018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.463 [2024-07-24 17:49:05.824033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x791240 with addr=10.0.0.2, port=4420 00:23:44.463 [2024-07-24 17:49:05.824048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x791240 is same with the state(5) to be set 00:23:44.463 [2024-07-24 17:49:05.825448] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:44.463 [2024-07-24 17:49:05.825837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.463 [2024-07-24 17:49:05.826204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.463 [2024-07-24 17:49:05.826219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x783e40 with addr=10.0.0.2, port=4420 00:23:44.463 [2024-07-24 17:49:05.826230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x783e40 is same with the state(5) to be set 00:23:44.463 [2024-07-24 17:49:05.826572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.463 [2024-07-24 17:49:05.826965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.463 [2024-07-24 17:49:05.826979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c6660 with addr=10.0.0.2, port=4420 00:23:44.463 [2024-07-24 17:49:05.826989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c6660 is same with the state(5) to be set 00:23:44.463 [2024-07-24 17:49:05.827004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x791240 (9): Bad file descriptor 00:23:44.463 [2024-07-24 17:49:05.827054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:44.463 [2024-07-24 17:49:05.827122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 17:49:05.827315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.463 [2024-07-24 
17:49:05.827336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.463 [2024-07-24 17:49:05.827348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827552] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.827986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.827997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.828007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.464 [2024-07-24 17:49:05.828018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.464 [2024-07-24 17:49:05.828028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828520] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8f3cc0 was disconnected and freed. reset controller. 00:23:44.465 [2024-07-24 17:49:05.828875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.828990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.828999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.829011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.829020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.829031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.829041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.829059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.829068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.829080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.829089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.829101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.829110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.829122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.829131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.829143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.829155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.829166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.829175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.829187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.465 [2024-07-24 17:49:05.829196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.465 [2024-07-24 17:49:05.829208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac070 is same with the state(5) to be set 00:23:44.466 [2024-07-24 17:49:05.829452] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ac070 was disconnected and freed. reset controller. 
00:23:44.466 [2024-07-24 17:49:05.829520] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:44.466 [2024-07-24 17:49:05.829572] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:44.466 [2024-07-24 17:49:05.829622] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:44.466 [2024-07-24 17:49:05.829656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 
17:49:05.829851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.829981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.829990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.466 [2024-07-24 17:49:05.830233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.466 [2024-07-24 17:49:05.830245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.830987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.830998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.467 [2024-07-24 17:49:05.831007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.467 [2024-07-24 17:49:05.831018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ca9b0 is same with the state(5) to be set 00:23:44.467 [2024-07-24 17:49:05.831083] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17ca9b0 was disconnected and freed. reset controller. 00:23:44.467 [2024-07-24 17:49:05.831137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x783e40 (9): Bad file descriptor 00:23:44.467 [2024-07-24 17:49:05.831151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6660 (9): Bad file descriptor 00:23:44.467 [2024-07-24 17:49:05.831162] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:44.467 [2024-07-24 17:49:05.831172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:44.468 [2024-07-24 17:49:05.831182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:44.468 [2024-07-24 17:49:05.831218] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.468 [2024-07-24 17:49:05.831233] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:44.468 [2024-07-24 17:49:05.831248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82d950 (9): Bad file descriptor 00:23:44.468 [2024-07-24 17:49:05.831269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x765470 (9): Bad file descriptor 00:23:44.468 [2024-07-24 17:49:05.831302] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.468 [2024-07-24 17:49:05.834525] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.468 [2024-07-24 17:49:05.834559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.468 [2024-07-24 17:49:05.834571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:44.468 [2024-07-24 17:49:05.834594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:44.468 [2024-07-24 17:49:05.834603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:44.468 [2024-07-24 17:49:05.834617] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:44.468 [2024-07-24 17:49:05.834630] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:44.468 [2024-07-24 17:49:05.834637] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:44.468 [2024-07-24 17:49:05.834644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:44.468 [2024-07-24 17:49:05.834660] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.468 [2024-07-24 17:49:05.834671] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:44.468 [2024-07-24 17:49:05.834753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 
17:49:05.834940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.834983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.834995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835127] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.468 [2024-07-24 17:49:05.835312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.468 [2024-07-24 17:49:05.835322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.469 [2024-07-24 17:49:05.835863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.469 [2024-07-24 17:49:05.835873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.835881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.835895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.835903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.835912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aec30 is same with the state(5) to be set 00:23:44.470 [2024-07-24 17:49:05.837099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.470 [2024-07-24 17:49:05.837691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.470 [2024-07-24 17:49:05.837701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.837985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.837992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.838246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.838255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0210 is same with the state(5) to be set 00:23:44.471 [2024-07-24 17:49:05.839462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.839477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.839490] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.839498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.839508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.839516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.839526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.839534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.471 [2024-07-24 17:49:05.839544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.471 [2024-07-24 17:49:05.839552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:38 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.839982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.839992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39936 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.472 [2024-07-24 17:49:05.840195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.472 [2024-07-24 17:49:05.840203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:44.473 [2024-07-24 17:49:05.840395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 
[2024-07-24 17:49:05.840571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.473 [2024-07-24 17:49:05.840605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.473 [2024-07-24 17:49:05.840614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196d560 is same with the state(5) to be set 00:23:44.473 [2024-07-24 17:49:05.844171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:44.473 [2024-07-24 17:49:05.844194] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.473 [2024-07-24 17:49:05.844201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.473 [2024-07-24 17:49:05.844207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:44.473 [2024-07-24 17:49:05.844732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.473 [2024-07-24 17:49:05.845223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.473 [2024-07-24 17:49:05.845236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762710 with addr=10.0.0.2, port=4420 00:23:44.473 [2024-07-24 17:49:05.845243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762710 is same with the state(5) to be set 00:23:44.473 [2024-07-24 17:49:05.845712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.473 [2024-07-24 17:49:05.846096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.473 [2024-07-24 17:49:05.846106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77e250 with addr=10.0.0.2, port=4420 00:23:44.473 [2024-07-24 17:49:05.846113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e250 is same with the state(5) to be set 00:23:44.473 [2024-07-24 17:49:05.846135] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.473 [2024-07-24 17:49:05.846157] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:44.473 [2024-07-24 17:49:05.846866] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:44.473 [2024-07-24 17:49:05.846883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:44.473 [2024-07-24 17:49:05.847374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.473 [2024-07-24 17:49:05.847769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.473 [2024-07-24 17:49:05.847780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x765470 with addr=10.0.0.2, port=4420 00:23:44.473 [2024-07-24 17:49:05.847791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765470 is same with the state(5) to be set 00:23:44.473 [2024-07-24 17:49:05.848208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.473 [2024-07-24 17:49:05.848618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.473 [2024-07-24 17:49:05.848629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x826a20 with addr=10.0.0.2, port=4420 00:23:44.473 [2024-07-24 17:49:05.848636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a20 is same with the state(5) to be set 00:23:44.473 [2024-07-24 17:49:05.848648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762710 (9): Bad file descriptor 00:23:44.473 [2024-07-24 17:49:05.848657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77e250 (9): Bad file descriptor 00:23:44.473 [2024-07-24 17:49:05.848677] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.473 [2024-07-24 17:49:05.848688] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
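A note on the two error patterns that dominate this stretch of output: the nvme_qpair print lines are bdevperf's in-flight I/Os completing with status (00/08), i.e. status code type 0x0 (generic) and status code 0x08, "Command Aborted due to SQ Deletion", which is what the host reports when a controller's submission queues are torn down during a reset. The posix.c connect() failures carry errno = 111 because the shutdown test has already taken the NVMe-oF target down, so nothing is listening on 10.0.0.2:4420 any more; on Linux errno 111 is ECONNREFUSED. A quick, illustrative way to confirm that mapping on the build host (not part of this run):

```bash
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# expected on Linux: ECONNREFUSED - Connection refused
```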
00:23:44.473 [2024-07-24 17:49:05.849202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 
17:49:05.849365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849510] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.474 [2024-07-24 17:49:05.849673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.474 [2024-07-24 17:49:05.849679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.849985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.849994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.475 [2024-07-24 17:49:05.850154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.475 [2024-07-24 17:49:05.850161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b16c0 is same with the state(5) to be set 00:23:44.475 [2024-07-24 17:49:05.851694] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:44.475 task offset: 40320 on job bdev=Nvme3n1 fails
00:23:44.475
00:23:44.475 Latency(us)
00:23:44.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:44.475 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:44.475 Job: Nvme1n1 ended in about 0.77 seconds with error
00:23:44.475 Verification LBA range: start 0x0 length 0x400
00:23:44.475 Nvme1n1 : 0.77 378.88 23.68 83.33 0.00 137687.08 66105.88 130388.15
00:23:44.475 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:44.476 Job: Nvme2n1 ended in about 0.76 seconds with error
00:23:44.476 Verification LBA range: start 0x0 length 0x400
00:23:44.476 Nvme2n1 : 0.76 455.50 28.47 84.50 0.00 116763.18 10485.76 102578.09
00:23:44.476 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:44.476 Job: Nvme3n1 ended in about 0.75 seconds with error
00:23:44.476 Verification LBA range: start 0x0 length 0x400
00:23:44.476 Nvme3n1 : 0.75 385.71 24.11 84.83 0.00 132843.60 43538.70 135858.98
00:23:44.476 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:44.476 Job: Nvme4n1 ended in about 0.77 seconds with error
00:23:44.476 Verification LBA range: start 0x0 length 0x400
00:23:44.476 Nvme4n1 : 0.77 430.42 26.90 31.21 0.00 133306.32 9972.87 113975.65
00:23:44.476 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:44.476 Job: Nvme5n1 ended in about 0.76 seconds with error
00:23:44.476 Verification LBA range: start 0x0 length 0x400
00:23:44.476 Nvme5n1 : 0.76 385.06 24.07 84.69 0.00 130732.32 37384.01 127652.73
00:23:44.476 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:44.476 Job: Nvme6n1 ended in about 0.77 seconds with error
00:23:44.476 Verification LBA range: start 0x0 length 0x400
00:23:44.476 Nvme6n1 : 0.77 376.64 23.54 82.83 0.00 132581.62 71120.81 113063.85
00:23:44.476 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:44.476 Job: Nvme7n1 ended in about 0.77 seconds with error
00:23:44.476 Verification LBA range: start 0x0 length 0x400
00:23:44.476 Nvme7n1 : 0.77 381.96 23.87 82.59 0.00 130007.44 10941.66 108048.92
00:23:44.476 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:44.476 Job: Nvme8n1 ended in about 0.79 seconds with error
00:23:44.476 Verification LBA range: start 0x0 length 0x400
00:23:44.476 Nvme8n1 : 0.79 369.86 23.12 81.34 0.00 132810.21 68385.39 114887.46
00:23:44.476 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:44.476 Job: Nvme9n1 ended in about 0.77 seconds with error
00:23:44.476 Verification LBA range: start 0x0 length 0x400
00:23:44.476 Nvme9n1 : 0.77 375.26 23.45 83.10 0.00 129407.25 33964.74 123093.70
00:23:44.476 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:44.476 Job: Nvme10n1 ended in about 0.78 seconds with error
00:23:44.476 Verification LBA range: start 0x0 length 0x400
00:23:44.476 Nvme10n1 : 0.78 380.80 23.80 82.34 0.00 127043.83 7750.34 105769.41
00:23:44.476 ===================================================================================================================
00:23:44.476 Total : 3920.09 245.01 780.75 0.00 130111.17 7750.34 135858.98
00:23:44.476 [2024-07-24 17:49:05.877001] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:44.476 [2024-07-24 17:49:05.877040] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:44.476 [2024-07-24 17:49:05.877522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.877915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.877926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x826120 with addr=10.0.0.2, port=4420 00:23:44.476 [2024-07-24 17:49:05.877936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826120 is same with the state(5) to be set 00:23:44.476 [2024-07-24 17:49:05.878385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.878738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.878748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d9e0 with addr=10.0.0.2, port=4420 00:23:44.476 [2024-07-24 17:49:05.878755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d9e0 is same with the state(5) to be set 00:23:44.476 [2024-07-24 17:49:05.878768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x765470 (9): Bad file descriptor 00:23:44.476 [2024-07-24 17:49:05.878780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x826a20 (9): Bad file descriptor 00:23:44.476 [2024-07-24 17:49:05.878788] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.476 [2024-07-24 17:49:05.878795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.476 [2024-07-24 17:49:05.878804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*:
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.476 [2024-07-24 17:49:05.878819] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:44.476 [2024-07-24 17:49:05.878825] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:44.476 [2024-07-24 17:49:05.878831] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:44.476 [2024-07-24 17:49:05.878941] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:44.476 [2024-07-24 17:49:05.878954] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:44.476 [2024-07-24 17:49:05.878972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.476 [2024-07-24 17:49:05.878979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.476 [2024-07-24 17:49:05.879436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.879839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.879850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x791240 with addr=10.0.0.2, port=4420 00:23:44.476 [2024-07-24 17:49:05.879857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x791240 is same with the state(5) to be set 00:23:44.476 [2024-07-24 17:49:05.880320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.880663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.880672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82d950 with addr=10.0.0.2, port=4420 00:23:44.476 [2024-07-24 17:49:05.880678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82d950 is same with the state(5) to be set 00:23:44.476 [2024-07-24 17:49:05.880687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x826120 (9): Bad file descriptor 00:23:44.476 [2024-07-24 17:49:05.880696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d9e0 (9): Bad file descriptor 00:23:44.476 [2024-07-24 17:49:05.880704] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:44.476 [2024-07-24 17:49:05.880709] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:44.476 [2024-07-24 17:49:05.880716] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:44.476 [2024-07-24 17:49:05.880726] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:44.476 [2024-07-24 17:49:05.880731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:44.476 [2024-07-24 17:49:05.880738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:44.476 [2024-07-24 17:49:05.880754] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
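While this reset/reconnect loop is spinning, each controller cycles through nvme_ctrlr_disconnect ("resetting controller"), a refused reconnect ("controller reinitialization failed"), and nvme_ctrlr_fail ("in failed state"). If the controller states needed to be inspected interactively, they could be queried from bdevperf's RPC socket; the socket path and the -n filter below are assumptions based on SPDK's standard tooling, not commands taken from this log:

```bash
# Sketch: list the NVMe bdev controllers and their state as seen by bdevperf.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n Nvme3
```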
00:23:44.476 [2024-07-24 17:49:05.880765] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.476 [2024-07-24 17:49:05.880794] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.476 [2024-07-24 17:49:05.880804] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.476 [2024-07-24 17:49:05.881085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.476 [2024-07-24 17:49:05.881095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.476 [2024-07-24 17:49:05.881446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.881941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.881954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c6660 with addr=10.0.0.2, port=4420 00:23:44.476 [2024-07-24 17:49:05.881962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c6660 is same with the state(5) to be set 00:23:44.476 [2024-07-24 17:49:05.882376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.882853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.476 [2024-07-24 17:49:05.882864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x783e40 with addr=10.0.0.2, port=4420 00:23:44.476 [2024-07-24 17:49:05.882873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x783e40 is same with the state(5) to be set 00:23:44.476 [2024-07-24 17:49:05.882883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x791240 (9): Bad file descriptor 00:23:44.476 [2024-07-24 17:49:05.882896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82d950 (9): Bad file descriptor 00:23:44.476 [2024-07-24 17:49:05.882905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:44.476 [2024-07-24 17:49:05.882912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:44.476 [2024-07-24 17:49:05.882919] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:44.476 [2024-07-24 17:49:05.882930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:44.477 [2024-07-24 17:49:05.882936] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:44.477 [2024-07-24 17:49:05.882943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:44.477 [2024-07-24 17:49:05.882988] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:44.477 [2024-07-24 17:49:05.883000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.477 [2024-07-24 17:49:05.883010] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.477 [2024-07-24 17:49:05.883016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.477 [2024-07-24 17:49:05.883036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6660 (9): Bad file descriptor 00:23:44.477 [2024-07-24 17:49:05.883051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x783e40 (9): Bad file descriptor 00:23:44.477 [2024-07-24 17:49:05.883060] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:44.477 [2024-07-24 17:49:05.883066] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:44.477 [2024-07-24 17:49:05.883073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:44.477 [2024-07-24 17:49:05.883082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:44.477 [2024-07-24 17:49:05.883089] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:44.477 [2024-07-24 17:49:05.883097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:44.477 [2024-07-24 17:49:05.883141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.477 [2024-07-24 17:49:05.883150] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.477 [2024-07-24 17:49:05.883634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.477 [2024-07-24 17:49:05.884051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.477 [2024-07-24 17:49:05.884065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77e250 with addr=10.0.0.2, port=4420 00:23:44.477 [2024-07-24 17:49:05.884072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e250 is same with the state(5) to be set 00:23:44.477 [2024-07-24 17:49:05.884511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.477 [2024-07-24 17:49:05.884903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.477 [2024-07-24 17:49:05.884915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762710 with addr=10.0.0.2, port=4420 00:23:44.477 [2024-07-24 17:49:05.884922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762710 is same with the state(5) to be set 00:23:44.477 [2024-07-24 17:49:05.884930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:44.477 [2024-07-24 17:49:05.884937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:44.477 [2024-07-24 17:49:05.884947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:44.477 [2024-07-24 17:49:05.884956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:44.477 [2024-07-24 17:49:05.884964] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:44.477 [2024-07-24 17:49:05.884971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:44.477 [2024-07-24 17:49:05.884999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.477 [2024-07-24 17:49:05.885006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.477 [2024-07-24 17:49:05.885015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77e250 (9): Bad file descriptor 00:23:44.477 [2024-07-24 17:49:05.885026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762710 (9): Bad file descriptor 00:23:44.477 [2024-07-24 17:49:05.885058] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:44.477 [2024-07-24 17:49:05.885067] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:44.477 [2024-07-24 17:49:05.885073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:44.477 [2024-07-24 17:49:05.885082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.477 [2024-07-24 17:49:05.885089] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.477 [2024-07-24 17:49:05.885096] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.477 [2024-07-24 17:49:05.885124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.477 [2024-07-24 17:49:05.885131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
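This whole block is the expected failure mode for shutdown test case 3: bdevperf drives a verify workload (queue depth 64, 64 KiB I/Os, per the Job: lines above) against ten NVMe-oF TCP subsystems while the script kills the target out from under it, so every reset and failover attempt has nowhere to reconnect. A minimal sketch of that pattern, with paths, run time, and the $nvmfpid variable assumed rather than taken from this log:

```bash
# bdevperf needs the Nvme*n1 bdevs attached first (e.g. via a --json config, omitted here).
./examples/bdev/bdevperf/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 &
bdevperf_pid=$!
sleep 2
kill -9 "$nvmfpid"            # drop the NVMe-oF target mid-run
wait "$bdevperf_pid" || true  # bdevperf exits non-zero: resets and failovers cannot succeed
```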
00:23:44.737 17:49:06 -- target/shutdown.sh@135 -- # nvmfpid= 00:23:44.737 17:49:06 -- target/shutdown.sh@138 -- # sleep 1 00:23:45.677 17:49:07 -- target/shutdown.sh@141 -- # kill -9 701666 00:23:45.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (701666) - No such process 00:23:45.677 17:49:07 -- target/shutdown.sh@141 -- # true 00:23:45.677 17:49:07 -- target/shutdown.sh@143 -- # stoptarget 00:23:45.677 17:49:07 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:45.677 17:49:07 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:45.677 17:49:07 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:45.677 17:49:07 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:45.677 17:49:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:45.677 17:49:07 -- nvmf/common.sh@116 -- # sync 00:23:45.677 17:49:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:45.677 17:49:07 -- nvmf/common.sh@119 -- # set +e 00:23:45.677 17:49:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:45.677 17:49:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:45.677 rmmod nvme_tcp 00:23:45.677 rmmod nvme_fabrics 00:23:45.938 rmmod nvme_keyring 00:23:45.938 17:49:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:45.938 17:49:07 -- nvmf/common.sh@123 -- # set -e 00:23:45.938 17:49:07 -- nvmf/common.sh@124 -- # return 0 00:23:45.938 17:49:07 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:23:45.938 17:49:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:45.938 17:49:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:45.938 17:49:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:45.938 17:49:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.938 17:49:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:45.938 17:49:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.938 17:49:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.938 17:49:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.845 17:49:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:47.845 00:23:47.845 real 0m7.589s 00:23:47.845 user 0m18.390s 00:23:47.845 sys 0m1.308s 00:23:47.845 17:49:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.845 17:49:09 -- common/autotest_common.sh@10 -- # set +x 00:23:47.845 ************************************ 00:23:47.845 END TEST nvmf_shutdown_tc3 00:23:47.845 ************************************ 00:23:47.845 17:49:09 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:23:47.845 00:23:47.845 real 0m31.098s 00:23:47.845 user 1m18.202s 00:23:47.845 sys 0m8.302s 00:23:47.845 17:49:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.845 17:49:09 -- common/autotest_common.sh@10 -- # set +x 00:23:47.845 ************************************ 00:23:47.845 END TEST nvmf_shutdown 00:23:47.845 ************************************ 00:23:48.103 17:49:09 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:48.103 17:49:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:48.103 17:49:09 -- common/autotest_common.sh@10 -- # set +x 00:23:48.103 17:49:09 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:48.104 17:49:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:48.104 17:49:09 -- common/autotest_common.sh@10 -- # set +x 00:23:48.104 17:49:09 
-- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:48.104 17:49:09 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:48.104 17:49:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:48.104 17:49:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:48.104 17:49:09 -- common/autotest_common.sh@10 -- # set +x 00:23:48.104 ************************************ 00:23:48.104 START TEST nvmf_multicontroller 00:23:48.104 ************************************ 00:23:48.104 17:49:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:48.104 * Looking for test storage... 00:23:48.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.104 17:49:09 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.104 17:49:09 -- nvmf/common.sh@7 -- # uname -s 00:23:48.104 17:49:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.104 17:49:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.104 17:49:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.104 17:49:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.104 17:49:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.104 17:49:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.104 17:49:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.104 17:49:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.104 17:49:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.104 17:49:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.104 17:49:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.104 17:49:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.104 17:49:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.104 17:49:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.104 17:49:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.104 17:49:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.104 17:49:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.104 17:49:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.104 17:49:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.104 17:49:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.104 17:49:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.104 17:49:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.104 17:49:09 -- paths/export.sh@5 -- # export PATH 00:23:48.104 17:49:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.104 17:49:09 -- nvmf/common.sh@46 -- # : 0 00:23:48.104 17:49:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:48.104 17:49:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:48.104 17:49:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:48.104 17:49:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.104 17:49:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.104 17:49:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:48.104 17:49:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:48.104 17:49:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:48.104 17:49:09 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:48.104 17:49:09 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:48.104 17:49:09 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:48.104 17:49:09 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:48.104 17:49:09 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.104 17:49:09 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:48.104 17:49:09 -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:48.104 17:49:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:48.104 17:49:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.104 17:49:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:48.104 17:49:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:48.104 17:49:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:48.104 17:49:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.104 17:49:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.104 17:49:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:23:48.104 17:49:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:48.104 17:49:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:48.104 17:49:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:48.104 17:49:09 -- common/autotest_common.sh@10 -- # set +x 00:23:53.400 17:49:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:53.400 17:49:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:53.400 17:49:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:53.400 17:49:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:53.400 17:49:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:53.400 17:49:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:53.400 17:49:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:53.400 17:49:14 -- nvmf/common.sh@294 -- # net_devs=() 00:23:53.400 17:49:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:53.400 17:49:14 -- nvmf/common.sh@295 -- # e810=() 00:23:53.400 17:49:14 -- nvmf/common.sh@295 -- # local -ga e810 00:23:53.400 17:49:14 -- nvmf/common.sh@296 -- # x722=() 00:23:53.400 17:49:14 -- nvmf/common.sh@296 -- # local -ga x722 00:23:53.400 17:49:14 -- nvmf/common.sh@297 -- # mlx=() 00:23:53.400 17:49:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:53.400 17:49:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.400 17:49:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:53.400 17:49:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:53.400 17:49:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:53.400 17:49:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:53.400 17:49:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:53.400 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:53.400 17:49:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:53.400 17:49:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:53.400 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:53.400 17:49:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:23:53.400 17:49:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:53.400 17:49:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:53.400 17:49:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.400 17:49:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:53.400 17:49:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.400 17:49:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:53.400 Found net devices under 0000:86:00.0: cvl_0_0 00:23:53.400 17:49:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.400 17:49:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:53.400 17:49:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.400 17:49:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:53.400 17:49:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.400 17:49:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:53.400 Found net devices under 0000:86:00.1: cvl_0_1 00:23:53.400 17:49:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.400 17:49:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:53.400 17:49:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:53.400 17:49:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:53.400 17:49:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.400 17:49:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.400 17:49:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.400 17:49:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:53.400 17:49:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.400 17:49:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.400 17:49:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:53.400 17:49:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.400 17:49:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.400 17:49:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:53.400 17:49:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:53.400 17:49:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.400 17:49:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.400 17:49:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.400 17:49:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.400 17:49:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:53.400 17:49:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.400 17:49:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.400 17:49:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:23:53.400 17:49:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:53.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:23:53.400 00:23:53.400 --- 10.0.0.2 ping statistics --- 00:23:53.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.400 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:23:53.400 17:49:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:23:53.400 00:23:53.400 --- 10.0.0.1 ping statistics --- 00:23:53.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.400 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:23:53.400 17:49:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.400 17:49:14 -- nvmf/common.sh@410 -- # return 0 00:23:53.400 17:49:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:53.400 17:49:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.400 17:49:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:53.400 17:49:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.400 17:49:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:53.400 17:49:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:53.400 17:49:14 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:53.400 17:49:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:53.400 17:49:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:53.400 17:49:14 -- common/autotest_common.sh@10 -- # set +x 00:23:53.659 17:49:15 -- nvmf/common.sh@469 -- # nvmfpid=705744 00:23:53.659 17:49:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:53.659 17:49:15 -- nvmf/common.sh@470 -- # waitforlisten 705744 00:23:53.659 17:49:15 -- common/autotest_common.sh@819 -- # '[' -z 705744 ']' 00:23:53.659 17:49:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.659 17:49:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:53.659 17:49:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.659 17:49:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:53.659 17:49:15 -- common/autotest_common.sh@10 -- # set +x 00:23:53.659 [2024-07-24 17:49:15.046750] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:53.659 [2024-07-24 17:49:15.046794] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.659 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.659 [2024-07-24 17:49:15.104858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:53.659 [2024-07-24 17:49:15.183988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:53.659 [2024-07-24 17:49:15.184098] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:53.659 [2024-07-24 17:49:15.184107] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.659 [2024-07-24 17:49:15.184113] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.659 [2024-07-24 17:49:15.184149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.659 [2024-07-24 17:49:15.184169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.659 [2024-07-24 17:49:15.184171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.598 17:49:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:54.598 17:49:15 -- common/autotest_common.sh@852 -- # return 0 00:23:54.598 17:49:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:54.598 17:49:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:54.598 17:49:15 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 17:49:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.598 17:49:15 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.598 17:49:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:15 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 [2024-07-24 17:49:15.901287] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.598 17:49:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.598 17:49:15 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:54.598 17:49:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:15 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 Malloc0 00:23:54.598 17:49:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.598 17:49:15 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.598 17:49:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:15 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 17:49:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.598 17:49:15 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.598 17:49:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:15 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 17:49:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.598 17:49:15 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.598 17:49:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:15 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 [2024-07-24 17:49:15.961111] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.598 17:49:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.598 17:49:15 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:54.598 17:49:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:15 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 [2024-07-24 17:49:15.969073] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.598 17:49:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
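The rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py, so the same cnode1 target can be rebuilt by hand with the plain CLI. A minimal sketch, assuming nvmf_tgt is already running with its RPC socket at the default /var/tmp/spdk.sock and reusing the exact flags from the trace:

  # create the TCP transport with the same flags the test passes
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # a 64 MB malloc-backed bdev with 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem cnode1 exposing that namespace on two TCP listeners
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Note that although the target app itself was started inside the cvl_0_0_ns_spdk network namespace, the RPC socket is an ordinary filesystem path, so these calls run from the host side just as rpc_cmd does in the trace.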
00:23:54.598 17:49:15 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:54.598 17:49:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:15 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 Malloc1 00:23:54.598 17:49:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.598 17:49:15 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:54.598 17:49:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:15 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 17:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.598 17:49:16 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:54.598 17:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:16 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 17:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.598 17:49:16 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:54.598 17:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:16 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 17:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.598 17:49:16 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:54.598 17:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.598 17:49:16 -- common/autotest_common.sh@10 -- # set +x 00:23:54.598 17:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.598 17:49:16 -- host/multicontroller.sh@44 -- # bdevperf_pid=705996 00:23:54.598 17:49:16 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:54.598 17:49:16 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.598 17:49:16 -- host/multicontroller.sh@47 -- # waitforlisten 705996 /var/tmp/bdevperf.sock 00:23:54.598 17:49:16 -- common/autotest_common.sh@819 -- # '[' -z 705996 ']' 00:23:54.598 17:49:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.598 17:49:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:54.598 17:49:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:54.598 17:49:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:54.598 17:49:16 -- common/autotest_common.sh@10 -- # set +x 00:23:55.535 17:49:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:55.535 17:49:16 -- common/autotest_common.sh@852 -- # return 0 00:23:55.535 17:49:16 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:55.535 17:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.535 17:49:16 -- common/autotest_common.sh@10 -- # set +x 00:23:55.535 NVMe0n1 00:23:55.535 17:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:55.535 17:49:16 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:55.535 17:49:16 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:55.535 17:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.535 17:49:16 -- common/autotest_common.sh@10 -- # set +x 00:23:55.535 17:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:55.535 1 00:23:55.535 17:49:16 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:55.535 17:49:16 -- common/autotest_common.sh@640 -- # local es=0 00:23:55.535 17:49:16 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:55.535 17:49:16 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:55.535 17:49:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:55.535 17:49:16 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:55.535 17:49:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:55.535 17:49:16 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:55.535 17:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.535 17:49:16 -- common/autotest_common.sh@10 -- # set +x 00:23:55.535 request: 00:23:55.535 { 00:23:55.535 "name": "NVMe0", 00:23:55.535 "trtype": "tcp", 00:23:55.535 "traddr": "10.0.0.2", 00:23:55.535 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:55.535 "hostaddr": "10.0.0.2", 00:23:55.535 "hostsvcid": "60000", 00:23:55.535 "adrfam": "ipv4", 00:23:55.535 "trsvcid": "4420", 00:23:55.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.535 "method": "bdev_nvme_attach_controller", 00:23:55.535 "req_id": 1 00:23:55.535 } 00:23:55.535 Got JSON-RPC error response 00:23:55.535 response: 00:23:55.535 { 00:23:55.535 "code": -114, 00:23:55.535 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:55.535 } 00:23:55.535 17:49:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:55.535 17:49:17 -- common/autotest_common.sh@643 -- # es=1 00:23:55.535 17:49:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:55.535 17:49:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:55.535 17:49:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:55.535 17:49:17 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:55.535 17:49:17 -- common/autotest_common.sh@640 -- # local es=0 00:23:55.535 17:49:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:55.535 17:49:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:55.535 17:49:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:55.535 17:49:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:55.535 17:49:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:55.535 17:49:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:55.535 17:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.535 17:49:17 -- common/autotest_common.sh@10 -- # set +x 00:23:55.535 request: 00:23:55.535 { 00:23:55.535 "name": "NVMe0", 00:23:55.535 "trtype": "tcp", 00:23:55.535 "traddr": "10.0.0.2", 00:23:55.535 "hostaddr": "10.0.0.2", 00:23:55.535 "hostsvcid": "60000", 00:23:55.535 "adrfam": "ipv4", 00:23:55.535 "trsvcid": "4420", 00:23:55.535 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:55.535 "method": "bdev_nvme_attach_controller", 00:23:55.535 "req_id": 1 00:23:55.535 } 00:23:55.535 Got JSON-RPC error response 00:23:55.535 response: 00:23:55.535 { 00:23:55.535 "code": -114, 00:23:55.535 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:55.535 } 00:23:55.535 17:49:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:55.535 17:49:17 -- common/autotest_common.sh@643 -- # es=1 00:23:55.535 17:49:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:55.535 17:49:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:55.535 17:49:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:55.535 17:49:17 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:55.535 17:49:17 -- common/autotest_common.sh@640 -- # local es=0 00:23:55.535 17:49:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:55.535 17:49:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:55.535 17:49:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:55.535 17:49:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:55.535 17:49:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:55.535 17:49:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:55.535 17:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.535 17:49:17 -- common/autotest_common.sh@10 -- # set +x 00:23:55.535 request: 00:23:55.535 { 00:23:55.535 "name": "NVMe0", 00:23:55.535 "trtype": "tcp", 00:23:55.535 "traddr": "10.0.0.2", 00:23:55.535 "hostaddr": 
"10.0.0.2", 00:23:55.535 "hostsvcid": "60000", 00:23:55.535 "adrfam": "ipv4", 00:23:55.535 "trsvcid": "4420", 00:23:55.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.535 "multipath": "disable", 00:23:55.535 "method": "bdev_nvme_attach_controller", 00:23:55.535 "req_id": 1 00:23:55.535 } 00:23:55.535 Got JSON-RPC error response 00:23:55.535 response: 00:23:55.535 { 00:23:55.535 "code": -114, 00:23:55.535 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:55.535 } 00:23:55.535 17:49:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:55.535 17:49:17 -- common/autotest_common.sh@643 -- # es=1 00:23:55.535 17:49:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:55.535 17:49:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:55.535 17:49:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:55.535 17:49:17 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:55.535 17:49:17 -- common/autotest_common.sh@640 -- # local es=0 00:23:55.535 17:49:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:55.535 17:49:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:55.535 17:49:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:55.535 17:49:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:55.535 17:49:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:55.535 17:49:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:55.535 17:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.535 17:49:17 -- common/autotest_common.sh@10 -- # set +x 00:23:55.535 request: 00:23:55.535 { 00:23:55.535 "name": "NVMe0", 00:23:55.535 "trtype": "tcp", 00:23:55.535 "traddr": "10.0.0.2", 00:23:55.535 "hostaddr": "10.0.0.2", 00:23:55.535 "hostsvcid": "60000", 00:23:55.535 "adrfam": "ipv4", 00:23:55.535 "trsvcid": "4420", 00:23:55.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.535 "multipath": "failover", 00:23:55.535 "method": "bdev_nvme_attach_controller", 00:23:55.535 "req_id": 1 00:23:55.535 } 00:23:55.535 Got JSON-RPC error response 00:23:55.535 response: 00:23:55.535 { 00:23:55.535 "code": -114, 00:23:55.535 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:55.535 } 00:23:55.536 17:49:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:55.536 17:49:17 -- common/autotest_common.sh@643 -- # es=1 00:23:55.536 17:49:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:55.536 17:49:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:55.536 17:49:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:55.536 17:49:17 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:55.536 17:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.536 17:49:17 -- common/autotest_common.sh@10 -- # set +x 00:23:55.795 00:23:55.795 17:49:17 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:23:55.795 17:49:17 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:55.795 17:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.795 17:49:17 -- common/autotest_common.sh@10 -- # set +x 00:23:55.795 17:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:55.795 17:49:17 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:55.795 17:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.795 17:49:17 -- common/autotest_common.sh@10 -- # set +x 00:23:55.795 00:23:55.795 17:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:55.795 17:49:17 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:55.795 17:49:17 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:55.796 17:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:55.796 17:49:17 -- common/autotest_common.sh@10 -- # set +x 00:23:55.796 17:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:55.796 17:49:17 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:55.796 17:49:17 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.175 0 00:23:57.175 17:49:18 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:57.175 17:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.175 17:49:18 -- common/autotest_common.sh@10 -- # set +x 00:23:57.175 17:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.175 17:49:18 -- host/multicontroller.sh@100 -- # killprocess 705996 00:23:57.175 17:49:18 -- common/autotest_common.sh@926 -- # '[' -z 705996 ']' 00:23:57.175 17:49:18 -- common/autotest_common.sh@930 -- # kill -0 705996 00:23:57.175 17:49:18 -- common/autotest_common.sh@931 -- # uname 00:23:57.175 17:49:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:57.175 17:49:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 705996 00:23:57.175 17:49:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:57.175 17:49:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:57.175 17:49:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 705996' 00:23:57.175 killing process with pid 705996 00:23:57.175 17:49:18 -- common/autotest_common.sh@945 -- # kill 705996 00:23:57.175 17:49:18 -- common/autotest_common.sh@950 -- # wait 705996 00:23:57.175 17:49:18 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.175 17:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.175 17:49:18 -- common/autotest_common.sh@10 -- # set +x 00:23:57.175 17:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.175 17:49:18 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:57.175 17:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.175 17:49:18 -- common/autotest_common.sh@10 -- # set +x 00:23:57.175 17:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.175 17:49:18 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:57.175 
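The NOT-wrapped attach attempts above all target the path NVMe0 is already using (10.0.0.2:4420 from host 10.0.0.2:60000) and are rejected with -114, whether the request changes the hostnqn, points at cnode2, or asks for multipath disable/failover. What the trace does accept is adding the second listener as an extra path for NVMe0 and attaching an independent controller NVMe1; condensed, against the same bdevperf socket:

  # second path for the existing NVMe0 controller, via the 4421 listener
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # separate second controller against the same subsystem
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # bdev_nvme_get_controllers | grep -c NVMe then reports 2, which is the count asserted above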
17:49:18 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.175 17:49:18 -- common/autotest_common.sh@1597 -- # read -r file 00:23:57.175 17:49:18 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:57.175 17:49:18 -- common/autotest_common.sh@1596 -- # sort -u 00:23:57.175 17:49:18 -- common/autotest_common.sh@1598 -- # cat 00:23:57.175 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:57.175 [2024-07-24 17:49:16.066100] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:57.175 [2024-07-24 17:49:16.066147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705996 ] 00:23:57.175 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.175 [2024-07-24 17:49:16.120490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.175 [2024-07-24 17:49:16.192352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.175 [2024-07-24 17:49:17.295973] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 6959ce5a-af0a-4496-9b13-59e4baf6c51f already exists 00:23:57.175 [2024-07-24 17:49:17.296000] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:6959ce5a-af0a-4496-9b13-59e4baf6c51f alias for bdev NVMe1n1 00:23:57.175 [2024-07-24 17:49:17.296010] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:57.175 Running I/O for 1 seconds... 00:23:57.175 00:23:57.175 Latency(us) 00:23:57.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.175 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:57.175 NVMe0n1 : 1.01 23181.07 90.55 0.00 0.00 5504.72 3205.57 30317.52 00:23:57.175 =================================================================================================================== 00:23:57.175 Total : 23181.07 90.55 0.00 0.00 5504.72 3205.57 30317.52 00:23:57.175 Received shutdown signal, test time was about 1.000000 seconds 00:23:57.175 00:23:57.175 Latency(us) 00:23:57.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.175 =================================================================================================================== 00:23:57.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.175 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:57.175 17:49:18 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.175 17:49:18 -- common/autotest_common.sh@1597 -- # read -r file 00:23:57.175 17:49:18 -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:57.175 17:49:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:57.175 17:49:18 -- nvmf/common.sh@116 -- # sync 00:23:57.175 17:49:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:57.175 17:49:18 -- nvmf/common.sh@119 -- # set +e 00:23:57.175 17:49:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:57.175 17:49:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:57.175 rmmod nvme_tcp 00:23:57.175 rmmod nvme_fabrics 00:23:57.435 rmmod nvme_keyring 00:23:57.435 17:49:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:57.435 17:49:18 -- nvmf/common.sh@123 -- # set -e 
00:23:57.435 17:49:18 -- nvmf/common.sh@124 -- # return 0 00:23:57.435 17:49:18 -- nvmf/common.sh@477 -- # '[' -n 705744 ']' 00:23:57.435 17:49:18 -- nvmf/common.sh@478 -- # killprocess 705744 00:23:57.435 17:49:18 -- common/autotest_common.sh@926 -- # '[' -z 705744 ']' 00:23:57.435 17:49:18 -- common/autotest_common.sh@930 -- # kill -0 705744 00:23:57.435 17:49:18 -- common/autotest_common.sh@931 -- # uname 00:23:57.435 17:49:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:57.435 17:49:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 705744 00:23:57.435 17:49:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:57.435 17:49:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:57.435 17:49:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 705744' 00:23:57.435 killing process with pid 705744 00:23:57.435 17:49:18 -- common/autotest_common.sh@945 -- # kill 705744 00:23:57.435 17:49:18 -- common/autotest_common.sh@950 -- # wait 705744 00:23:57.697 17:49:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:57.697 17:49:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:57.697 17:49:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:57.697 17:49:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.697 17:49:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:57.697 17:49:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.697 17:49:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.697 17:49:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.605 17:49:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:59.605 00:23:59.605 real 0m11.661s 00:23:59.605 user 0m16.069s 00:23:59.605 sys 0m4.792s 00:23:59.605 17:49:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.605 17:49:21 -- common/autotest_common.sh@10 -- # set +x 00:23:59.605 ************************************ 00:23:59.605 END TEST nvmf_multicontroller 00:23:59.605 ************************************ 00:23:59.865 17:49:21 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:59.865 17:49:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:59.865 17:49:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:59.865 17:49:21 -- common/autotest_common.sh@10 -- # set +x 00:23:59.865 ************************************ 00:23:59.865 START TEST nvmf_aer 00:23:59.865 ************************************ 00:23:59.865 17:49:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:59.865 * Looking for test storage... 
00:23:59.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.865 17:49:21 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.865 17:49:21 -- nvmf/common.sh@7 -- # uname -s 00:23:59.865 17:49:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.865 17:49:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.865 17:49:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.865 17:49:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.865 17:49:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.865 17:49:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.865 17:49:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.865 17:49:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.865 17:49:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.865 17:49:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.865 17:49:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:59.865 17:49:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:59.865 17:49:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.865 17:49:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.865 17:49:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.865 17:49:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.865 17:49:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.865 17:49:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.865 17:49:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.865 17:49:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.865 17:49:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.865 17:49:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.865 17:49:21 -- paths/export.sh@5 -- # export PATH 00:23:59.865 17:49:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.865 17:49:21 -- nvmf/common.sh@46 -- # : 0 00:23:59.865 17:49:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:59.865 17:49:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:59.865 17:49:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:59.865 17:49:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.865 17:49:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.865 17:49:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:59.865 17:49:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:59.865 17:49:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:59.865 17:49:21 -- host/aer.sh@11 -- # nvmftestinit 00:23:59.865 17:49:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:59.865 17:49:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.865 17:49:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:59.865 17:49:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:59.865 17:49:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:59.865 17:49:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.865 17:49:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.865 17:49:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.865 17:49:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:59.865 17:49:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:59.865 17:49:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:59.865 17:49:21 -- common/autotest_common.sh@10 -- # set +x 00:24:05.142 17:49:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:05.142 17:49:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:05.142 17:49:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:05.142 17:49:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:05.142 17:49:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:05.142 17:49:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:05.142 17:49:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:05.142 17:49:26 -- nvmf/common.sh@294 -- # net_devs=() 00:24:05.142 17:49:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:05.142 17:49:26 -- nvmf/common.sh@295 -- # e810=() 00:24:05.142 17:49:26 -- nvmf/common.sh@295 -- # local -ga e810 00:24:05.142 17:49:26 -- nvmf/common.sh@296 -- # x722=() 00:24:05.142 
17:49:26 -- nvmf/common.sh@296 -- # local -ga x722 00:24:05.142 17:49:26 -- nvmf/common.sh@297 -- # mlx=() 00:24:05.142 17:49:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:05.142 17:49:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.142 17:49:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:05.142 17:49:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:05.142 17:49:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:05.142 17:49:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:05.142 17:49:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:05.142 17:49:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:05.142 17:49:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:05.143 17:49:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:05.143 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:05.143 17:49:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:05.143 17:49:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:05.143 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:05.143 17:49:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:05.143 17:49:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:05.143 17:49:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.143 17:49:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:05.143 17:49:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.143 17:49:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:05.143 Found net devices under 0000:86:00.0: cvl_0_0 00:24:05.143 17:49:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.143 17:49:26 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:05.143 17:49:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.143 17:49:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:05.143 17:49:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.143 17:49:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:05.143 Found net devices under 0000:86:00.1: cvl_0_1 00:24:05.143 17:49:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.143 17:49:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:05.143 17:49:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:05.143 17:49:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:05.143 17:49:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.143 17:49:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.143 17:49:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.143 17:49:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:05.143 17:49:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.143 17:49:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.143 17:49:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:05.143 17:49:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.143 17:49:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.143 17:49:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:05.143 17:49:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:05.143 17:49:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.143 17:49:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.143 17:49:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.143 17:49:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.143 17:49:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:05.143 17:49:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.143 17:49:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.143 17:49:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.143 17:49:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:05.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:24:05.143 00:24:05.143 --- 10.0.0.2 ping statistics --- 00:24:05.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.143 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:05.143 17:49:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:24:05.143 00:24:05.143 --- 10.0.0.1 ping statistics --- 00:24:05.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.143 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:24:05.143 17:49:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.143 17:49:26 -- nvmf/common.sh@410 -- # return 0 00:24:05.143 17:49:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:05.143 17:49:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.143 17:49:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:05.143 17:49:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.143 17:49:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:05.143 17:49:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:05.143 17:49:26 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:05.143 17:49:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:05.143 17:49:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:05.143 17:49:26 -- common/autotest_common.sh@10 -- # set +x 00:24:05.143 17:49:26 -- nvmf/common.sh@469 -- # nvmfpid=709789 00:24:05.143 17:49:26 -- nvmf/common.sh@470 -- # waitforlisten 709789 00:24:05.143 17:49:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:05.143 17:49:26 -- common/autotest_common.sh@819 -- # '[' -z 709789 ']' 00:24:05.143 17:49:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.143 17:49:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:05.143 17:49:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.143 17:49:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:05.143 17:49:26 -- common/autotest_common.sh@10 -- # set +x 00:24:05.403 [2024-07-24 17:49:26.772627] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:05.403 [2024-07-24 17:49:26.772671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.403 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.403 [2024-07-24 17:49:26.829163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.403 [2024-07-24 17:49:26.908594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:05.403 [2024-07-24 17:49:26.908702] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.403 [2024-07-24 17:49:26.908710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.403 [2024-07-24 17:49:26.908720] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
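As the startup notices just above point out, the target is launched with -e 0xFFFF, so the full tracepoint group mask is active and the trace buffer lives in shared memory. A small sketch of the two inspection options the log itself suggests (assuming the spdk_trace binary from this build is on PATH):

  # live snapshot of the nvmf target's tracepoints (shm id 0, matching the -i 0 the app was started with)
  spdk_trace -s nvmf -i 0
  # or keep the raw buffer for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0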
00:24:05.403 [2024-07-24 17:49:26.908760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.403 [2024-07-24 17:49:26.908776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.403 [2024-07-24 17:49:26.908867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.403 [2024-07-24 17:49:26.908869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.342 17:49:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:06.342 17:49:27 -- common/autotest_common.sh@852 -- # return 0 00:24:06.342 17:49:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:06.342 17:49:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:06.342 17:49:27 -- common/autotest_common.sh@10 -- # set +x 00:24:06.342 17:49:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.342 17:49:27 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.342 17:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.342 17:49:27 -- common/autotest_common.sh@10 -- # set +x 00:24:06.342 [2024-07-24 17:49:27.618361] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.342 17:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.342 17:49:27 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:06.342 17:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.342 17:49:27 -- common/autotest_common.sh@10 -- # set +x 00:24:06.342 Malloc0 00:24:06.342 17:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.342 17:49:27 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:06.342 17:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.342 17:49:27 -- common/autotest_common.sh@10 -- # set +x 00:24:06.342 17:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.342 17:49:27 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:06.342 17:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.342 17:49:27 -- common/autotest_common.sh@10 -- # set +x 00:24:06.342 17:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.342 17:49:27 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:06.342 17:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.342 17:49:27 -- common/autotest_common.sh@10 -- # set +x 00:24:06.342 [2024-07-24 17:49:27.670220] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.342 17:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.342 17:49:27 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:06.342 17:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.342 17:49:27 -- common/autotest_common.sh@10 -- # set +x 00:24:06.342 [2024-07-24 17:49:27.678025] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:06.342 [ 00:24:06.342 { 00:24:06.342 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:06.342 "subtype": "Discovery", 00:24:06.342 "listen_addresses": [], 00:24:06.342 "allow_any_host": true, 00:24:06.342 "hosts": [] 00:24:06.342 }, 00:24:06.342 { 00:24:06.342 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:06.342 "subtype": "NVMe", 00:24:06.342 "listen_addresses": [ 00:24:06.342 { 00:24:06.342 "transport": "TCP", 00:24:06.342 "trtype": "TCP", 00:24:06.342 "adrfam": "IPv4", 00:24:06.342 "traddr": "10.0.0.2", 00:24:06.342 "trsvcid": "4420" 00:24:06.342 } 00:24:06.342 ], 00:24:06.342 "allow_any_host": true, 00:24:06.342 "hosts": [], 00:24:06.342 "serial_number": "SPDK00000000000001", 00:24:06.342 "model_number": "SPDK bdev Controller", 00:24:06.342 "max_namespaces": 2, 00:24:06.342 "min_cntlid": 1, 00:24:06.342 "max_cntlid": 65519, 00:24:06.342 "namespaces": [ 00:24:06.342 { 00:24:06.342 "nsid": 1, 00:24:06.342 "bdev_name": "Malloc0", 00:24:06.342 "name": "Malloc0", 00:24:06.342 "nguid": "7A36F66923F34C55961E1FCAD03A943A", 00:24:06.342 "uuid": "7a36f669-23f3-4c55-961e-1fcad03a943a" 00:24:06.342 } 00:24:06.342 ] 00:24:06.342 } 00:24:06.342 ] 00:24:06.342 17:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.342 17:49:27 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:06.342 17:49:27 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:06.342 17:49:27 -- host/aer.sh@33 -- # aerpid=710045 00:24:06.342 17:49:27 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:06.342 17:49:27 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:06.342 17:49:27 -- common/autotest_common.sh@1244 -- # local i=0 00:24:06.342 17:49:27 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:06.342 17:49:27 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:24:06.342 17:49:27 -- common/autotest_common.sh@1247 -- # i=1 00:24:06.342 17:49:27 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:06.342 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.342 17:49:27 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:06.342 17:49:27 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:24:06.342 17:49:27 -- common/autotest_common.sh@1247 -- # i=2 00:24:06.342 17:49:27 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:06.342 17:49:27 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:06.342 17:49:27 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:24:06.342 17:49:27 -- common/autotest_common.sh@1247 -- # i=3 00:24:06.342 17:49:27 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:06.602 17:49:28 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:06.602 17:49:28 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:06.602 17:49:28 -- common/autotest_common.sh@1255 -- # return 0 00:24:06.602 17:49:28 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:06.602 17:49:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.602 17:49:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.602 Malloc1 00:24:06.602 17:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.602 17:49:28 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:06.602 17:49:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.602 17:49:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.602 17:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.602 17:49:28 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:06.602 17:49:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.602 17:49:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.602 [ 00:24:06.602 { 00:24:06.602 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:06.602 "subtype": "Discovery", 00:24:06.602 "listen_addresses": [], 00:24:06.602 "allow_any_host": true, 00:24:06.602 "hosts": [] 00:24:06.602 }, 00:24:06.602 { 00:24:06.602 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.602 "subtype": "NVMe", 00:24:06.602 "listen_addresses": [ 00:24:06.602 { 00:24:06.602 "transport": "TCP", 00:24:06.602 "trtype": "TCP", 00:24:06.602 "adrfam": "IPv4", 00:24:06.602 "traddr": "10.0.0.2", 00:24:06.602 "trsvcid": "4420" 00:24:06.602 } 00:24:06.602 ], 00:24:06.602 "allow_any_host": true, 00:24:06.602 "hosts": [], 00:24:06.602 "serial_number": "SPDK00000000000001", 00:24:06.602 "model_number": "SPDK bdev Controller", 00:24:06.602 "max_namespaces": 2, 00:24:06.602 "min_cntlid": 1, 00:24:06.602 "max_cntlid": 65519, 00:24:06.602 "namespaces": [ 00:24:06.602 { 00:24:06.602 "nsid": 1, 00:24:06.602 "bdev_name": "Malloc0", 00:24:06.602 "name": "Malloc0", 00:24:06.602 "nguid": "7A36F66923F34C55961E1FCAD03A943A", 00:24:06.602 "uuid": "7a36f669-23f3-4c55-961e-1fcad03a943a" 00:24:06.602 }, 00:24:06.602 { 00:24:06.602 "nsid": 2, 00:24:06.602 "bdev_name": "Malloc1", 00:24:06.602 "name": "Malloc1", 00:24:06.602 Asynchronous Event Request test 00:24:06.602 Attaching to 10.0.0.2 00:24:06.602 Attached to 10.0.0.2 00:24:06.602 Registering asynchronous event callbacks... 00:24:06.602 Starting namespace attribute notice tests for all controllers... 00:24:06.602 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:06.602 aer_cb - Changed Namespace 00:24:06.602 Cleaning up... 
00:24:06.602 "nguid": "EC31E5F1CFF345EEAC31C3AB5F430C84", 00:24:06.602 "uuid": "ec31e5f1-cff3-45ee-ac31-c3ab5f430c84" 00:24:06.602 } 00:24:06.602 ] 00:24:06.602 } 00:24:06.602 ] 00:24:06.602 17:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.602 17:49:28 -- host/aer.sh@43 -- # wait 710045 00:24:06.602 17:49:28 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:06.602 17:49:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.602 17:49:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.602 17:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.602 17:49:28 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:06.602 17:49:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.602 17:49:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.602 17:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.602 17:49:28 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:06.602 17:49:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.602 17:49:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.602 17:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.602 17:49:28 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:06.602 17:49:28 -- host/aer.sh@51 -- # nvmftestfini 00:24:06.602 17:49:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:06.602 17:49:28 -- nvmf/common.sh@116 -- # sync 00:24:06.602 17:49:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:06.602 17:49:28 -- nvmf/common.sh@119 -- # set +e 00:24:06.602 17:49:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:06.602 17:49:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:06.602 rmmod nvme_tcp 00:24:06.602 rmmod nvme_fabrics 00:24:06.602 rmmod nvme_keyring 00:24:06.602 17:49:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:06.603 17:49:28 -- nvmf/common.sh@123 -- # set -e 00:24:06.603 17:49:28 -- nvmf/common.sh@124 -- # return 0 00:24:06.603 17:49:28 -- nvmf/common.sh@477 -- # '[' -n 709789 ']' 00:24:06.603 17:49:28 -- nvmf/common.sh@478 -- # killprocess 709789 00:24:06.603 17:49:28 -- common/autotest_common.sh@926 -- # '[' -z 709789 ']' 00:24:06.603 17:49:28 -- common/autotest_common.sh@930 -- # kill -0 709789 00:24:06.603 17:49:28 -- common/autotest_common.sh@931 -- # uname 00:24:06.603 17:49:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:06.603 17:49:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 709789 00:24:06.862 17:49:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:06.862 17:49:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:06.862 17:49:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 709789' 00:24:06.862 killing process with pid 709789 00:24:06.862 17:49:28 -- common/autotest_common.sh@945 -- # kill 709789 00:24:06.862 [2024-07-24 17:49:28.222275] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:06.862 17:49:28 -- common/autotest_common.sh@950 -- # wait 709789 00:24:06.862 17:49:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:06.862 17:49:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:06.862 17:49:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:06.862 17:49:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.862 17:49:28 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:24:06.862 17:49:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.862 17:49:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.862 17:49:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.402 17:49:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:09.402 00:24:09.402 real 0m9.282s 00:24:09.402 user 0m7.502s 00:24:09.402 sys 0m4.535s 00:24:09.402 17:49:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.402 17:49:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.402 ************************************ 00:24:09.402 END TEST nvmf_aer 00:24:09.402 ************************************ 00:24:09.402 17:49:30 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:09.402 17:49:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:09.402 17:49:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:09.402 17:49:30 -- common/autotest_common.sh@10 -- # set +x 00:24:09.402 ************************************ 00:24:09.402 START TEST nvmf_async_init 00:24:09.402 ************************************ 00:24:09.402 17:49:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:09.402 * Looking for test storage... 00:24:09.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:09.402 17:49:30 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.402 17:49:30 -- nvmf/common.sh@7 -- # uname -s 00:24:09.402 17:49:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.402 17:49:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.402 17:49:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.402 17:49:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.402 17:49:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.402 17:49:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.402 17:49:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.402 17:49:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.402 17:49:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.402 17:49:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.402 17:49:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:09.402 17:49:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:09.402 17:49:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.402 17:49:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.402 17:49:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.402 17:49:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.402 17:49:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.402 17:49:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.402 17:49:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.402 17:49:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.402 17:49:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.402 17:49:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.402 17:49:30 -- paths/export.sh@5 -- # export PATH 00:24:09.402 17:49:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.402 17:49:30 -- nvmf/common.sh@46 -- # : 0 00:24:09.402 17:49:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:09.402 17:49:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:09.402 17:49:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:09.403 17:49:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.403 17:49:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.403 17:49:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:09.403 17:49:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:09.403 17:49:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:09.403 17:49:30 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:09.403 17:49:30 -- host/async_init.sh@14 -- # null_block_size=512 00:24:09.403 17:49:30 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:09.403 17:49:30 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:09.403 17:49:30 -- host/async_init.sh@20 -- # uuidgen 00:24:09.403 17:49:30 -- host/async_init.sh@20 -- # tr -d - 00:24:09.403 17:49:30 -- host/async_init.sh@20 -- # nguid=db60373228094b3bafec947b3ec7814b 00:24:09.403 17:49:30 -- host/async_init.sh@22 -- # nvmftestinit 00:24:09.403 17:49:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
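Before the network bring-up that follows, the async_init test has fixed its parameters: a 1024 MB null bdev (null0, 512-byte blocks), a host-side controller name of nvme0, and a namespace GUID produced by uuidgen | tr -d - (db60373228094b3bafec947b3ec7814b in this run). For orientation, the rpc_cmd sequence the test later drives against the target reduces to the sketch below; the direct scripts/rpc.py invocation and default RPC socket are assumptions (the rpc_cmd wrapper shown in the log hides them), and the address/port are the ones nvmftestinit configures next.

    # assumed manual equivalent of the rpc_cmd calls recorded later in this log
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512                      # name, size, block size (matches the 2097152 x 512 B bdev reported later)
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a  # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g db60373228094b3bafec947b3ec7814b
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0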
00:24:09.403 17:49:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.403 17:49:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:09.403 17:49:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:09.403 17:49:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:09.403 17:49:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.403 17:49:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.403 17:49:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.403 17:49:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:09.403 17:49:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:09.403 17:49:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:09.403 17:49:30 -- common/autotest_common.sh@10 -- # set +x 00:24:14.702 17:49:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:14.702 17:49:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:14.702 17:49:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:14.702 17:49:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:14.702 17:49:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:14.702 17:49:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:14.702 17:49:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:14.702 17:49:35 -- nvmf/common.sh@294 -- # net_devs=() 00:24:14.702 17:49:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:14.702 17:49:35 -- nvmf/common.sh@295 -- # e810=() 00:24:14.702 17:49:35 -- nvmf/common.sh@295 -- # local -ga e810 00:24:14.702 17:49:35 -- nvmf/common.sh@296 -- # x722=() 00:24:14.702 17:49:35 -- nvmf/common.sh@296 -- # local -ga x722 00:24:14.702 17:49:35 -- nvmf/common.sh@297 -- # mlx=() 00:24:14.702 17:49:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:14.702 17:49:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.702 17:49:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:14.702 17:49:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:14.702 17:49:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:14.702 17:49:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:14.702 17:49:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:14.702 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:14.702 17:49:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:14.702 17:49:35 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:14.702 17:49:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:14.702 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:14.702 17:49:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:14.702 17:49:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:14.702 17:49:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.702 17:49:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:14.702 17:49:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.702 17:49:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:14.702 Found net devices under 0000:86:00.0: cvl_0_0 00:24:14.702 17:49:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.702 17:49:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:14.702 17:49:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.702 17:49:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:14.702 17:49:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.702 17:49:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:14.702 Found net devices under 0000:86:00.1: cvl_0_1 00:24:14.702 17:49:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.702 17:49:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:14.702 17:49:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:14.702 17:49:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:14.702 17:49:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:14.702 17:49:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.702 17:49:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.702 17:49:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.702 17:49:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:14.702 17:49:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.702 17:49:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.702 17:49:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:14.702 17:49:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.702 17:49:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.702 17:49:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:14.702 17:49:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:14.702 17:49:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.702 17:49:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:24:14.702 17:49:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.702 17:49:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.702 17:49:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:14.702 17:49:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.702 17:49:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.702 17:49:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.702 17:49:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:14.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:24:14.702 00:24:14.702 --- 10.0.0.2 ping statistics --- 00:24:14.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.703 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:14.703 17:49:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:24:14.703 00:24:14.703 --- 10.0.0.1 ping statistics --- 00:24:14.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.703 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:24:14.703 17:49:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.703 17:49:36 -- nvmf/common.sh@410 -- # return 0 00:24:14.703 17:49:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:14.703 17:49:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.703 17:49:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:14.703 17:49:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:14.703 17:49:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.703 17:49:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:14.703 17:49:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:14.703 17:49:36 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:14.703 17:49:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:14.703 17:49:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:14.703 17:49:36 -- common/autotest_common.sh@10 -- # set +x 00:24:14.703 17:49:36 -- nvmf/common.sh@469 -- # nvmfpid=713577 00:24:14.703 17:49:36 -- nvmf/common.sh@470 -- # waitforlisten 713577 00:24:14.703 17:49:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:14.703 17:49:36 -- common/autotest_common.sh@819 -- # '[' -z 713577 ']' 00:24:14.703 17:49:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.703 17:49:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:14.703 17:49:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.703 17:49:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:14.703 17:49:36 -- common/autotest_common.sh@10 -- # set +x 00:24:14.703 [2024-07-24 17:49:36.271619] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
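For reference, the nvmf_tcp_init steps above amount to a two-port loopback topology: the target-side port cvl_0_0 is moved into its own network namespace and addressed as 10.0.0.2/24, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, so initiator/target traffic crosses the two E810 ports (presumably cabled back-to-back for this phy run). A condensed recap of those commands, copied from this log (interface names are specific to this node):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in on the initiator port
    ping -c 1 10.0.0.2                                              # sanity-check both directions before starting the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1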
00:24:14.703 [2024-07-24 17:49:36.271658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.970 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.970 [2024-07-24 17:49:36.328337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.970 [2024-07-24 17:49:36.406789] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:14.970 [2024-07-24 17:49:36.406895] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.970 [2024-07-24 17:49:36.406902] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.970 [2024-07-24 17:49:36.406909] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.970 [2024-07-24 17:49:36.406927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.537 17:49:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:15.537 17:49:37 -- common/autotest_common.sh@852 -- # return 0 00:24:15.537 17:49:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:15.537 17:49:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:15.537 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:15.537 17:49:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.537 17:49:37 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:15.537 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.537 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:15.537 [2024-07-24 17:49:37.109091] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.537 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.537 17:49:37 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:15.537 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.537 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:15.537 null0 00:24:15.537 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.537 17:49:37 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:15.537 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.537 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:15.537 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.537 17:49:37 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:15.537 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.537 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:15.796 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.796 17:49:37 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g db60373228094b3bafec947b3ec7814b 00:24:15.796 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.796 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:15.796 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.796 17:49:37 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:15.796 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.796 17:49:37 -- 
common/autotest_common.sh@10 -- # set +x 00:24:15.796 [2024-07-24 17:49:37.149291] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.796 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.796 17:49:37 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:15.796 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.796 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:15.796 nvme0n1 00:24:15.796 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.796 17:49:37 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:15.796 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.796 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.055 [ 00:24:16.055 { 00:24:16.055 "name": "nvme0n1", 00:24:16.055 "aliases": [ 00:24:16.055 "db603732-2809-4b3b-afec-947b3ec7814b" 00:24:16.055 ], 00:24:16.055 "product_name": "NVMe disk", 00:24:16.055 "block_size": 512, 00:24:16.055 "num_blocks": 2097152, 00:24:16.055 "uuid": "db603732-2809-4b3b-afec-947b3ec7814b", 00:24:16.055 "assigned_rate_limits": { 00:24:16.055 "rw_ios_per_sec": 0, 00:24:16.055 "rw_mbytes_per_sec": 0, 00:24:16.055 "r_mbytes_per_sec": 0, 00:24:16.055 "w_mbytes_per_sec": 0 00:24:16.055 }, 00:24:16.055 "claimed": false, 00:24:16.055 "zoned": false, 00:24:16.055 "supported_io_types": { 00:24:16.055 "read": true, 00:24:16.055 "write": true, 00:24:16.055 "unmap": false, 00:24:16.055 "write_zeroes": true, 00:24:16.055 "flush": true, 00:24:16.055 "reset": true, 00:24:16.055 "compare": true, 00:24:16.055 "compare_and_write": true, 00:24:16.055 "abort": true, 00:24:16.055 "nvme_admin": true, 00:24:16.055 "nvme_io": true 00:24:16.055 }, 00:24:16.055 "driver_specific": { 00:24:16.055 "nvme": [ 00:24:16.055 { 00:24:16.055 "trid": { 00:24:16.055 "trtype": "TCP", 00:24:16.055 "adrfam": "IPv4", 00:24:16.055 "traddr": "10.0.0.2", 00:24:16.055 "trsvcid": "4420", 00:24:16.055 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:16.055 }, 00:24:16.055 "ctrlr_data": { 00:24:16.055 "cntlid": 1, 00:24:16.055 "vendor_id": "0x8086", 00:24:16.055 "model_number": "SPDK bdev Controller", 00:24:16.055 "serial_number": "00000000000000000000", 00:24:16.055 "firmware_revision": "24.01.1", 00:24:16.055 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.055 "oacs": { 00:24:16.055 "security": 0, 00:24:16.055 "format": 0, 00:24:16.055 "firmware": 0, 00:24:16.055 "ns_manage": 0 00:24:16.055 }, 00:24:16.055 "multi_ctrlr": true, 00:24:16.055 "ana_reporting": false 00:24:16.055 }, 00:24:16.055 "vs": { 00:24:16.055 "nvme_version": "1.3" 00:24:16.055 }, 00:24:16.055 "ns_data": { 00:24:16.055 "id": 1, 00:24:16.055 "can_share": true 00:24:16.055 } 00:24:16.055 } 00:24:16.055 ], 00:24:16.055 "mp_policy": "active_passive" 00:24:16.055 } 00:24:16.055 } 00:24:16.055 ] 00:24:16.055 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.055 17:49:37 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:16.055 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.055 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.055 [2024-07-24 17:49:37.409864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.055 [2024-07-24 17:49:37.409920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23192b0 (9): Bad file 
descriptor 00:24:16.055 [2024-07-24 17:49:37.542117] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:16.055 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.055 17:49:37 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:16.055 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.055 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.055 [ 00:24:16.055 { 00:24:16.055 "name": "nvme0n1", 00:24:16.055 "aliases": [ 00:24:16.055 "db603732-2809-4b3b-afec-947b3ec7814b" 00:24:16.055 ], 00:24:16.055 "product_name": "NVMe disk", 00:24:16.055 "block_size": 512, 00:24:16.055 "num_blocks": 2097152, 00:24:16.055 "uuid": "db603732-2809-4b3b-afec-947b3ec7814b", 00:24:16.055 "assigned_rate_limits": { 00:24:16.056 "rw_ios_per_sec": 0, 00:24:16.056 "rw_mbytes_per_sec": 0, 00:24:16.056 "r_mbytes_per_sec": 0, 00:24:16.056 "w_mbytes_per_sec": 0 00:24:16.056 }, 00:24:16.056 "claimed": false, 00:24:16.056 "zoned": false, 00:24:16.056 "supported_io_types": { 00:24:16.056 "read": true, 00:24:16.056 "write": true, 00:24:16.056 "unmap": false, 00:24:16.056 "write_zeroes": true, 00:24:16.056 "flush": true, 00:24:16.056 "reset": true, 00:24:16.056 "compare": true, 00:24:16.056 "compare_and_write": true, 00:24:16.056 "abort": true, 00:24:16.056 "nvme_admin": true, 00:24:16.056 "nvme_io": true 00:24:16.056 }, 00:24:16.056 "driver_specific": { 00:24:16.056 "nvme": [ 00:24:16.056 { 00:24:16.056 "trid": { 00:24:16.056 "trtype": "TCP", 00:24:16.056 "adrfam": "IPv4", 00:24:16.056 "traddr": "10.0.0.2", 00:24:16.056 "trsvcid": "4420", 00:24:16.056 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:16.056 }, 00:24:16.056 "ctrlr_data": { 00:24:16.056 "cntlid": 2, 00:24:16.056 "vendor_id": "0x8086", 00:24:16.056 "model_number": "SPDK bdev Controller", 00:24:16.056 "serial_number": "00000000000000000000", 00:24:16.056 "firmware_revision": "24.01.1", 00:24:16.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.056 "oacs": { 00:24:16.056 "security": 0, 00:24:16.056 "format": 0, 00:24:16.056 "firmware": 0, 00:24:16.056 "ns_manage": 0 00:24:16.056 }, 00:24:16.056 "multi_ctrlr": true, 00:24:16.056 "ana_reporting": false 00:24:16.056 }, 00:24:16.056 "vs": { 00:24:16.056 "nvme_version": "1.3" 00:24:16.056 }, 00:24:16.056 "ns_data": { 00:24:16.056 "id": 1, 00:24:16.056 "can_share": true 00:24:16.056 } 00:24:16.056 } 00:24:16.056 ], 00:24:16.056 "mp_policy": "active_passive" 00:24:16.056 } 00:24:16.056 } 00:24:16.056 ] 00:24:16.056 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.056 17:49:37 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.056 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.056 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.056 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.056 17:49:37 -- host/async_init.sh@53 -- # mktemp 00:24:16.056 17:49:37 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.N1nBPhkFG3 00:24:16.056 17:49:37 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:16.056 17:49:37 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.N1nBPhkFG3 00:24:16.056 17:49:37 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:16.056 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.056 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.056 17:49:37 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.056 17:49:37 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:16.056 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.056 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.056 [2024-07-24 17:49:37.598430] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.056 [2024-07-24 17:49:37.598518] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:16.056 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.056 17:49:37 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N1nBPhkFG3 00:24:16.056 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.056 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.056 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.056 17:49:37 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N1nBPhkFG3 00:24:16.056 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.056 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.056 [2024-07-24 17:49:37.618487] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.314 nvme0n1 00:24:16.314 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.314 17:49:37 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:16.314 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.314 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.314 [ 00:24:16.314 { 00:24:16.314 "name": "nvme0n1", 00:24:16.314 "aliases": [ 00:24:16.314 "db603732-2809-4b3b-afec-947b3ec7814b" 00:24:16.314 ], 00:24:16.314 "product_name": "NVMe disk", 00:24:16.314 "block_size": 512, 00:24:16.315 "num_blocks": 2097152, 00:24:16.315 "uuid": "db603732-2809-4b3b-afec-947b3ec7814b", 00:24:16.315 "assigned_rate_limits": { 00:24:16.315 "rw_ios_per_sec": 0, 00:24:16.315 "rw_mbytes_per_sec": 0, 00:24:16.315 "r_mbytes_per_sec": 0, 00:24:16.315 "w_mbytes_per_sec": 0 00:24:16.315 }, 00:24:16.315 "claimed": false, 00:24:16.315 "zoned": false, 00:24:16.315 "supported_io_types": { 00:24:16.315 "read": true, 00:24:16.315 "write": true, 00:24:16.315 "unmap": false, 00:24:16.315 "write_zeroes": true, 00:24:16.315 "flush": true, 00:24:16.315 "reset": true, 00:24:16.315 "compare": true, 00:24:16.315 "compare_and_write": true, 00:24:16.315 "abort": true, 00:24:16.315 "nvme_admin": true, 00:24:16.315 "nvme_io": true 00:24:16.315 }, 00:24:16.315 "driver_specific": { 00:24:16.315 "nvme": [ 00:24:16.315 { 00:24:16.315 "trid": { 00:24:16.315 "trtype": "TCP", 00:24:16.315 "adrfam": "IPv4", 00:24:16.315 "traddr": "10.0.0.2", 00:24:16.315 "trsvcid": "4421", 00:24:16.315 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:16.315 }, 00:24:16.315 "ctrlr_data": { 00:24:16.315 "cntlid": 3, 00:24:16.315 "vendor_id": "0x8086", 00:24:16.315 "model_number": "SPDK bdev Controller", 00:24:16.315 "serial_number": "00000000000000000000", 00:24:16.315 "firmware_revision": "24.01.1", 00:24:16.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.315 "oacs": { 00:24:16.315 "security": 0, 00:24:16.315 "format": 0, 00:24:16.315 "firmware": 0, 00:24:16.315 
"ns_manage": 0 00:24:16.315 }, 00:24:16.315 "multi_ctrlr": true, 00:24:16.315 "ana_reporting": false 00:24:16.315 }, 00:24:16.315 "vs": { 00:24:16.315 "nvme_version": "1.3" 00:24:16.315 }, 00:24:16.315 "ns_data": { 00:24:16.315 "id": 1, 00:24:16.315 "can_share": true 00:24:16.315 } 00:24:16.315 } 00:24:16.315 ], 00:24:16.315 "mp_policy": "active_passive" 00:24:16.315 } 00:24:16.315 } 00:24:16.315 ] 00:24:16.315 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.315 17:49:37 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.315 17:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.315 17:49:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.315 17:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.315 17:49:37 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.N1nBPhkFG3 00:24:16.315 17:49:37 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:16.315 17:49:37 -- host/async_init.sh@78 -- # nvmftestfini 00:24:16.315 17:49:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:16.315 17:49:37 -- nvmf/common.sh@116 -- # sync 00:24:16.315 17:49:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:16.315 17:49:37 -- nvmf/common.sh@119 -- # set +e 00:24:16.315 17:49:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:16.315 17:49:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:16.315 rmmod nvme_tcp 00:24:16.315 rmmod nvme_fabrics 00:24:16.315 rmmod nvme_keyring 00:24:16.315 17:49:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:16.315 17:49:37 -- nvmf/common.sh@123 -- # set -e 00:24:16.315 17:49:37 -- nvmf/common.sh@124 -- # return 0 00:24:16.315 17:49:37 -- nvmf/common.sh@477 -- # '[' -n 713577 ']' 00:24:16.315 17:49:37 -- nvmf/common.sh@478 -- # killprocess 713577 00:24:16.315 17:49:37 -- common/autotest_common.sh@926 -- # '[' -z 713577 ']' 00:24:16.315 17:49:37 -- common/autotest_common.sh@930 -- # kill -0 713577 00:24:16.315 17:49:37 -- common/autotest_common.sh@931 -- # uname 00:24:16.315 17:49:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:16.315 17:49:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 713577 00:24:16.315 17:49:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:16.315 17:49:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:16.315 17:49:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 713577' 00:24:16.315 killing process with pid 713577 00:24:16.315 17:49:37 -- common/autotest_common.sh@945 -- # kill 713577 00:24:16.315 17:49:37 -- common/autotest_common.sh@950 -- # wait 713577 00:24:16.573 17:49:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:16.573 17:49:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:16.573 17:49:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:16.573 17:49:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.573 17:49:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:16.573 17:49:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.573 17:49:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.573 17:49:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.479 17:49:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:18.737 00:24:18.737 real 0m9.545s 00:24:18.737 user 0m3.517s 00:24:18.737 sys 0m4.554s 00:24:18.737 17:49:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.737 17:49:40 -- 
common/autotest_common.sh@10 -- # set +x 00:24:18.738 ************************************ 00:24:18.738 END TEST nvmf_async_init 00:24:18.738 ************************************ 00:24:18.738 17:49:40 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:18.738 17:49:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:18.738 17:49:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:18.738 17:49:40 -- common/autotest_common.sh@10 -- # set +x 00:24:18.738 ************************************ 00:24:18.738 START TEST dma 00:24:18.738 ************************************ 00:24:18.738 17:49:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:18.738 * Looking for test storage... 00:24:18.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.738 17:49:40 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.738 17:49:40 -- nvmf/common.sh@7 -- # uname -s 00:24:18.738 17:49:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.738 17:49:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.738 17:49:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.738 17:49:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.738 17:49:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.738 17:49:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.738 17:49:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.738 17:49:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.738 17:49:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.738 17:49:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.738 17:49:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:18.738 17:49:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:18.738 17:49:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.738 17:49:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.738 17:49:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.738 17:49:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.738 17:49:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.738 17:49:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.738 17:49:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.738 17:49:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.738 17:49:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.738 17:49:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.738 17:49:40 -- paths/export.sh@5 -- # export PATH 00:24:18.738 17:49:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.738 17:49:40 -- nvmf/common.sh@46 -- # : 0 00:24:18.738 17:49:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:18.738 17:49:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:18.738 17:49:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:18.738 17:49:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.738 17:49:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.738 17:49:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:18.738 17:49:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:18.738 17:49:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:18.738 17:49:40 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:18.738 17:49:40 -- host/dma.sh@13 -- # exit 0 00:24:18.738 00:24:18.738 real 0m0.109s 00:24:18.738 user 0m0.052s 00:24:18.738 sys 0m0.065s 00:24:18.738 17:49:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.738 17:49:40 -- common/autotest_common.sh@10 -- # set +x 00:24:18.738 ************************************ 00:24:18.738 END TEST dma 00:24:18.738 ************************************ 00:24:18.738 17:49:40 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:18.738 17:49:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:18.738 17:49:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:18.738 17:49:40 -- common/autotest_common.sh@10 -- # set +x 00:24:18.738 ************************************ 00:24:18.738 START TEST nvmf_identify 00:24:18.738 ************************************ 00:24:18.738 17:49:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:18.738 * Looking for 
test storage... 00:24:18.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.738 17:49:40 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.738 17:49:40 -- nvmf/common.sh@7 -- # uname -s 00:24:18.738 17:49:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.738 17:49:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.738 17:49:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.738 17:49:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.738 17:49:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.738 17:49:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.738 17:49:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.738 17:49:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.738 17:49:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.998 17:49:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.998 17:49:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:18.998 17:49:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:18.998 17:49:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.998 17:49:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.998 17:49:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.998 17:49:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.998 17:49:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.998 17:49:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.998 17:49:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.998 17:49:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.998 17:49:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.998 17:49:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.998 17:49:40 -- paths/export.sh@5 -- # export PATH 00:24:18.998 17:49:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.998 17:49:40 -- nvmf/common.sh@46 -- # : 0 00:24:18.998 17:49:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:18.998 17:49:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:18.998 17:49:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:18.998 17:49:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.998 17:49:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.998 17:49:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:18.998 17:49:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:18.998 17:49:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:18.998 17:49:40 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:18.998 17:49:40 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:18.998 17:49:40 -- host/identify.sh@14 -- # nvmftestinit 00:24:18.998 17:49:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:18.998 17:49:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.998 17:49:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:18.998 17:49:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:18.998 17:49:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:18.998 17:49:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.998 17:49:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.998 17:49:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.998 17:49:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:18.998 17:49:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:18.998 17:49:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:18.998 17:49:40 -- common/autotest_common.sh@10 -- # set +x 00:24:24.268 17:49:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:24.268 17:49:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:24.268 17:49:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:24.268 17:49:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:24.268 17:49:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:24.268 17:49:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:24.268 17:49:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:24.268 17:49:45 -- nvmf/common.sh@294 -- # net_devs=() 00:24:24.268 17:49:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:24.268 17:49:45 -- nvmf/common.sh@295 
-- # e810=() 00:24:24.268 17:49:45 -- nvmf/common.sh@295 -- # local -ga e810 00:24:24.268 17:49:45 -- nvmf/common.sh@296 -- # x722=() 00:24:24.268 17:49:45 -- nvmf/common.sh@296 -- # local -ga x722 00:24:24.268 17:49:45 -- nvmf/common.sh@297 -- # mlx=() 00:24:24.268 17:49:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:24.268 17:49:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.268 17:49:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:24.268 17:49:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:24.268 17:49:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:24.268 17:49:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:24.268 17:49:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:24.268 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:24.268 17:49:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:24.268 17:49:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:24.268 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:24.268 17:49:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:24.268 17:49:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:24.268 17:49:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.268 17:49:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:24.268 17:49:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.268 17:49:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:24.268 Found 
net devices under 0000:86:00.0: cvl_0_0 00:24:24.268 17:49:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.268 17:49:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:24.268 17:49:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.268 17:49:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:24.268 17:49:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.268 17:49:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:24.268 Found net devices under 0000:86:00.1: cvl_0_1 00:24:24.268 17:49:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.268 17:49:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:24.268 17:49:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:24.268 17:49:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:24.268 17:49:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:24.268 17:49:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.268 17:49:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.268 17:49:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.268 17:49:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:24.268 17:49:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.268 17:49:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.268 17:49:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:24.268 17:49:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.268 17:49:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.268 17:49:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:24.268 17:49:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:24.268 17:49:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.268 17:49:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.268 17:49:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.268 17:49:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.268 17:49:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:24.268 17:49:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.269 17:49:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.269 17:49:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.269 17:49:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:24.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:24:24.269 00:24:24.269 --- 10.0.0.2 ping statistics --- 00:24:24.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.269 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:24:24.269 17:49:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:24:24.269 00:24:24.269 --- 10.0.0.1 ping statistics --- 00:24:24.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.269 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:24:24.269 17:49:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.269 17:49:45 -- nvmf/common.sh@410 -- # return 0 00:24:24.269 17:49:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:24.269 17:49:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.269 17:49:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:24.269 17:49:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:24.269 17:49:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.269 17:49:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:24.269 17:49:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:24.269 17:49:45 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:24.269 17:49:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:24.269 17:49:45 -- common/autotest_common.sh@10 -- # set +x 00:24:24.269 17:49:45 -- host/identify.sh@19 -- # nvmfpid=717185 00:24:24.269 17:49:45 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:24.269 17:49:45 -- host/identify.sh@23 -- # waitforlisten 717185 00:24:24.269 17:49:45 -- common/autotest_common.sh@819 -- # '[' -z 717185 ']' 00:24:24.269 17:49:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.269 17:49:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:24.269 17:49:45 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:24.269 17:49:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.269 17:49:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:24.269 17:49:45 -- common/autotest_common.sh@10 -- # set +x 00:24:24.269 [2024-07-24 17:49:45.424875] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:24.269 [2024-07-24 17:49:45.424920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.269 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.269 [2024-07-24 17:49:45.486063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.269 [2024-07-24 17:49:45.565087] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:24.269 [2024-07-24 17:49:45.565189] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.269 [2024-07-24 17:49:45.565197] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.269 [2024-07-24 17:49:45.565203] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
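The trace above is the interesting part of nvmf_tcp_init: with two ice ports detected (cvl_0_0 and cvl_0_1 in this run; the names are host-specific), the harness isolates one of them in a network namespace so the target and initiator ends of the link live on the same machine but in separate network stacks. A condensed sketch of that setup, using only the commands visible in the trace, is:

    # flush stale addressing, then move the target port into its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the default NVMe/TCP port and confirm reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Anything launched through NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk ...), such as the nvmf_tgt instance started above, then serves 10.0.0.2, while the host-side tools reach it from 10.0.0.1 over cvl_0_1.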
00:24:24.269 [2024-07-24 17:49:45.565242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.269 [2024-07-24 17:49:45.565338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.269 [2024-07-24 17:49:45.565436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.269 [2024-07-24 17:49:45.565437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.836 17:49:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:24.836 17:49:46 -- common/autotest_common.sh@852 -- # return 0 00:24:24.836 17:49:46 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.836 17:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:24.836 17:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.836 [2024-07-24 17:49:46.226204] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.836 17:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:24.836 17:49:46 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:24.836 17:49:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:24.836 17:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.836 17:49:46 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:24.836 17:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:24.836 17:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.836 Malloc0 00:24:24.836 17:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:24.836 17:49:46 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:24.836 17:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:24.836 17:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.836 17:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:24.836 17:49:46 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:24.836 17:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:24.836 17:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.836 17:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:24.836 17:49:46 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.836 17:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:24.836 17:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.836 [2024-07-24 17:49:46.313880] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.836 17:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:24.836 17:49:46 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:24.836 17:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:24.836 17:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.836 17:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:24.836 17:49:46 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:24.836 17:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:24.836 17:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.836 [2024-07-24 17:49:46.329726] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:24.836 [ 
00:24:24.836 { 00:24:24.836 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:24.836 "subtype": "Discovery", 00:24:24.836 "listen_addresses": [ 00:24:24.836 { 00:24:24.836 "transport": "TCP", 00:24:24.836 "trtype": "TCP", 00:24:24.836 "adrfam": "IPv4", 00:24:24.836 "traddr": "10.0.0.2", 00:24:24.836 "trsvcid": "4420" 00:24:24.836 } 00:24:24.836 ], 00:24:24.836 "allow_any_host": true, 00:24:24.837 "hosts": [] 00:24:24.837 }, 00:24:24.837 { 00:24:24.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.837 "subtype": "NVMe", 00:24:24.837 "listen_addresses": [ 00:24:24.837 { 00:24:24.837 "transport": "TCP", 00:24:24.837 "trtype": "TCP", 00:24:24.837 "adrfam": "IPv4", 00:24:24.837 "traddr": "10.0.0.2", 00:24:24.837 "trsvcid": "4420" 00:24:24.837 } 00:24:24.837 ], 00:24:24.837 "allow_any_host": true, 00:24:24.837 "hosts": [], 00:24:24.837 "serial_number": "SPDK00000000000001", 00:24:24.837 "model_number": "SPDK bdev Controller", 00:24:24.837 "max_namespaces": 32, 00:24:24.837 "min_cntlid": 1, 00:24:24.837 "max_cntlid": 65519, 00:24:24.837 "namespaces": [ 00:24:24.837 { 00:24:24.837 "nsid": 1, 00:24:24.837 "bdev_name": "Malloc0", 00:24:24.837 "name": "Malloc0", 00:24:24.837 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:24.837 "eui64": "ABCDEF0123456789", 00:24:24.837 "uuid": "8bc1cb86-e6aa-4452-8769-cae1252adde4" 00:24:24.837 } 00:24:24.837 ] 00:24:24.837 } 00:24:24.837 ] 00:24:24.837 17:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:24.837 17:49:46 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:24.837 [2024-07-24 17:49:46.364373] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
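Once nvmf_tgt is listening on /var/tmp/spdk.sock, host/identify.sh builds the entire target configuration over RPC. rpc_cmd here is the test framework's wrapper that forwards to scripts/rpc.py against that socket (an assumption about the wrapper itself; the RPC names and arguments are exactly the ones traced above). In order:

    # TCP transport with the harness's default options (-o, -u 8192)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192

    # a 64 MB, 512-byte-block malloc (RAM) bdev to back the namespace
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0

    # NVM subsystem cnode1: allow any host (-a), fixed serial number (-s)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

    # attach Malloc0 as a namespace with the fixed NGUID/EUI-64 values echoed back in the JSON above
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

    # listeners on the namespaced target address, for the NVM subsystem and for discovery
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # dump the resulting configuration (the JSON block above)
    rpc_cmd nvmf_get_subsystems

The nvmf_get_subsystems output is what the identify runs below are checked against: one discovery subsystem and one NVM subsystem (cnode1) with a single Malloc0 namespace (nsid 1), both listening on 10.0.0.2:4420.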
00:24:24.837 [2024-07-24 17:49:46.364420] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid717437 ] 00:24:24.837 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.837 [2024-07-24 17:49:46.394588] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:24.837 [2024-07-24 17:49:46.394635] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:24.837 [2024-07-24 17:49:46.394640] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:24.837 [2024-07-24 17:49:46.394651] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:24.837 [2024-07-24 17:49:46.394658] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:24.837 [2024-07-24 17:49:46.395277] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:24.837 [2024-07-24 17:49:46.395309] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20699e0 0 00:24:24.837 [2024-07-24 17:49:46.410055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:24.837 [2024-07-24 17:49:46.410072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:24.837 [2024-07-24 17:49:46.410077] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:24.837 [2024-07-24 17:49:46.410081] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:24.837 [2024-07-24 17:49:46.410117] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.410123] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.410126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20699e0) 00:24:24.837 [2024-07-24 17:49:46.410140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:24.837 [2024-07-24 17:49:46.410158] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1730, cid 0, qid 0 00:24:24.837 [2024-07-24 17:49:46.418054] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.837 [2024-07-24 17:49:46.418063] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.837 [2024-07-24 17:49:46.418066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418073] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1730) on tqpair=0x20699e0 00:24:24.837 [2024-07-24 17:49:46.418084] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:24.837 [2024-07-24 17:49:46.418091] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:24.837 [2024-07-24 17:49:46.418097] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:24.837 [2024-07-24 17:49:46.418108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418112] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418115] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20699e0) 00:24:24.837 [2024-07-24 17:49:46.418122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.837 [2024-07-24 17:49:46.418134] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1730, cid 0, qid 0 00:24:24.837 [2024-07-24 17:49:46.418375] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.837 [2024-07-24 17:49:46.418385] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.837 [2024-07-24 17:49:46.418388] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418391] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1730) on tqpair=0x20699e0 00:24:24.837 [2024-07-24 17:49:46.418398] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:24.837 [2024-07-24 17:49:46.418406] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:24.837 [2024-07-24 17:49:46.418412] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418416] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418419] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20699e0) 00:24:24.837 [2024-07-24 17:49:46.418425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.837 [2024-07-24 17:49:46.418437] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1730, cid 0, qid 0 00:24:24.837 [2024-07-24 17:49:46.418578] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.837 [2024-07-24 17:49:46.418588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.837 [2024-07-24 17:49:46.418591] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418594] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1730) on tqpair=0x20699e0 00:24:24.837 [2024-07-24 17:49:46.418600] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:24.837 [2024-07-24 17:49:46.418609] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:24.837 [2024-07-24 17:49:46.418616] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418620] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418623] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20699e0) 00:24:24.837 [2024-07-24 17:49:46.418629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.837 [2024-07-24 17:49:46.418641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1730, cid 0, qid 0 00:24:24.837 [2024-07-24 17:49:46.418779] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.837 [2024-07-24 
17:49:46.418789] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.837 [2024-07-24 17:49:46.418792] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418798] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1730) on tqpair=0x20699e0 00:24:24.837 [2024-07-24 17:49:46.418804] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:24.837 [2024-07-24 17:49:46.418814] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418818] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418821] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20699e0) 00:24:24.837 [2024-07-24 17:49:46.418828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.837 [2024-07-24 17:49:46.418839] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1730, cid 0, qid 0 00:24:24.837 [2024-07-24 17:49:46.418979] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.837 [2024-07-24 17:49:46.418988] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.837 [2024-07-24 17:49:46.418991] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.418994] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1730) on tqpair=0x20699e0 00:24:24.837 [2024-07-24 17:49:46.419000] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:24.837 [2024-07-24 17:49:46.419004] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:24.837 [2024-07-24 17:49:46.419012] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:24.837 [2024-07-24 17:49:46.419118] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:24.837 [2024-07-24 17:49:46.419123] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:24.837 [2024-07-24 17:49:46.419132] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.419136] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.419139] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20699e0) 00:24:24.837 [2024-07-24 17:49:46.419145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.837 [2024-07-24 17:49:46.419158] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1730, cid 0, qid 0 00:24:24.837 [2024-07-24 17:49:46.419300] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.837 [2024-07-24 17:49:46.419309] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.837 [2024-07-24 17:49:46.419312] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:24.837 [2024-07-24 17:49:46.419316] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1730) on tqpair=0x20699e0 00:24:24.838 [2024-07-24 17:49:46.419322] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:24.838 [2024-07-24 17:49:46.419332] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.419336] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.419339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20699e0) 00:24:24.838 [2024-07-24 17:49:46.419345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.838 [2024-07-24 17:49:46.419357] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1730, cid 0, qid 0 00:24:24.838 [2024-07-24 17:49:46.419497] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.838 [2024-07-24 17:49:46.419507] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.838 [2024-07-24 17:49:46.419513] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.419516] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1730) on tqpair=0x20699e0 00:24:24.838 [2024-07-24 17:49:46.419522] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:24.838 [2024-07-24 17:49:46.419526] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:24.838 [2024-07-24 17:49:46.419534] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:24.838 [2024-07-24 17:49:46.419547] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:24.838 [2024-07-24 17:49:46.419557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.419560] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.419563] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20699e0) 00:24:24.838 [2024-07-24 17:49:46.419570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.838 [2024-07-24 17:49:46.419582] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1730, cid 0, qid 0 00:24:24.838 [2024-07-24 17:49:46.419832] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.838 [2024-07-24 17:49:46.419842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.838 [2024-07-24 17:49:46.419846] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.419849] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20699e0): datao=0, datal=4096, cccid=0 00:24:24.838 [2024-07-24 17:49:46.419854] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d1730) on tqpair(0x20699e0): 
expected_datao=0, payload_size=4096 00:24:24.838 [2024-07-24 17:49:46.419861] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.419865] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420148] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.838 [2024-07-24 17:49:46.420154] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.838 [2024-07-24 17:49:46.420156] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420160] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1730) on tqpair=0x20699e0 00:24:24.838 [2024-07-24 17:49:46.420168] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:24.838 [2024-07-24 17:49:46.420173] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:24.838 [2024-07-24 17:49:46.420177] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:24.838 [2024-07-24 17:49:46.420182] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:24.838 [2024-07-24 17:49:46.420186] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:24.838 [2024-07-24 17:49:46.420190] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:24.838 [2024-07-24 17:49:46.420202] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:24.838 [2024-07-24 17:49:46.420209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420212] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420215] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20699e0) 00:24:24.838 [2024-07-24 17:49:46.420225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:24.838 [2024-07-24 17:49:46.420237] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1730, cid 0, qid 0 00:24:24.838 [2024-07-24 17:49:46.420389] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.838 [2024-07-24 17:49:46.420399] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.838 [2024-07-24 17:49:46.420402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420406] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1730) on tqpair=0x20699e0 00:24:24.838 [2024-07-24 17:49:46.420415] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420418] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420421] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20699e0) 00:24:24.838 [2024-07-24 17:49:46.420427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:24.838 [2024-07-24 17:49:46.420433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420436] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20699e0) 00:24:24.838 [2024-07-24 17:49:46.420443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.838 [2024-07-24 17:49:46.420448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420452] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420455] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20699e0) 00:24:24.838 [2024-07-24 17:49:46.420459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.838 [2024-07-24 17:49:46.420464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:24.838 [2024-07-24 17:49:46.420475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.838 [2024-07-24 17:49:46.420479] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:24.838 [2024-07-24 17:49:46.420492] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:24.838 [2024-07-24 17:49:46.420498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420501] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420504] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20699e0) 00:24:24.838 [2024-07-24 17:49:46.420510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.838 [2024-07-24 17:49:46.420524] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1730, cid 0, qid 0 00:24:24.838 [2024-07-24 17:49:46.420529] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1890, cid 1, qid 0 00:24:24.838 [2024-07-24 17:49:46.420533] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d19f0, cid 2, qid 0 00:24:24.838 [2024-07-24 17:49:46.420537] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:24.838 [2024-07-24 17:49:46.420540] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1cb0, cid 4, qid 0 00:24:24.838 [2024-07-24 17:49:46.420723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.838 [2024-07-24 17:49:46.420733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.838 [2024-07-24 17:49:46.420736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.838 [2024-07-24 17:49:46.420739] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1cb0) on tqpair=0x20699e0 00:24:24.838 [2024-07-24 17:49:46.420745] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:24.839 [2024-07-24 17:49:46.420750] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:24.839 [2024-07-24 17:49:46.420761] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.420765] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.420768] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20699e0) 00:24:24.839 [2024-07-24 17:49:46.420775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.839 [2024-07-24 17:49:46.420787] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1cb0, cid 4, qid 0 00:24:24.839 [2024-07-24 17:49:46.421023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.839 [2024-07-24 17:49:46.421033] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.839 [2024-07-24 17:49:46.421036] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421039] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20699e0): datao=0, datal=4096, cccid=4 00:24:24.839 [2024-07-24 17:49:46.421049] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d1cb0) on tqpair(0x20699e0): expected_datao=0, payload_size=4096 00:24:24.839 [2024-07-24 17:49:46.421056] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421059] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421311] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.839 [2024-07-24 17:49:46.421316] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.839 [2024-07-24 17:49:46.421319] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421322] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1cb0) on tqpair=0x20699e0 00:24:24.839 [2024-07-24 17:49:46.421336] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:24.839 [2024-07-24 17:49:46.421360] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20699e0) 00:24:24.839 [2024-07-24 17:49:46.421374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.839 [2024-07-24 17:49:46.421380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421383] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421386] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20699e0) 00:24:24.839 [2024-07-24 
17:49:46.421391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.839 [2024-07-24 17:49:46.421406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1cb0, cid 4, qid 0 00:24:24.839 [2024-07-24 17:49:46.421411] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1e10, cid 5, qid 0 00:24:24.839 [2024-07-24 17:49:46.421597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.839 [2024-07-24 17:49:46.421606] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.839 [2024-07-24 17:49:46.421613] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421616] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20699e0): datao=0, datal=1024, cccid=4 00:24:24.839 [2024-07-24 17:49:46.421620] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d1cb0) on tqpair(0x20699e0): expected_datao=0, payload_size=1024 00:24:24.839 [2024-07-24 17:49:46.421626] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421630] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421635] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.839 [2024-07-24 17:49:46.421640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.839 [2024-07-24 17:49:46.421642] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.839 [2024-07-24 17:49:46.421645] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1e10) on tqpair=0x20699e0 00:24:25.102 [2024-07-24 17:49:46.466050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.102 [2024-07-24 17:49:46.466061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.102 [2024-07-24 17:49:46.466064] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466068] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1cb0) on tqpair=0x20699e0 00:24:25.102 [2024-07-24 17:49:46.466080] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466087] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20699e0) 00:24:25.102 [2024-07-24 17:49:46.466093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.102 [2024-07-24 17:49:46.466110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1cb0, cid 4, qid 0 00:24:25.102 [2024-07-24 17:49:46.466442] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:25.102 [2024-07-24 17:49:46.466453] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:25.102 [2024-07-24 17:49:46.466456] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466459] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20699e0): datao=0, datal=3072, cccid=4 00:24:25.102 [2024-07-24 17:49:46.466463] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d1cb0) on tqpair(0x20699e0): expected_datao=0, payload_size=3072 
00:24:25.102 [2024-07-24 17:49:46.466470] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466473] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466566] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.102 [2024-07-24 17:49:46.466575] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.102 [2024-07-24 17:49:46.466578] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466582] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1cb0) on tqpair=0x20699e0 00:24:25.102 [2024-07-24 17:49:46.466593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466597] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20699e0) 00:24:25.102 [2024-07-24 17:49:46.466606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.102 [2024-07-24 17:49:46.466624] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1cb0, cid 4, qid 0 00:24:25.102 [2024-07-24 17:49:46.466780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:25.102 [2024-07-24 17:49:46.466790] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:25.102 [2024-07-24 17:49:46.466793] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466799] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20699e0): datao=0, datal=8, cccid=4 00:24:25.102 [2024-07-24 17:49:46.466803] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d1cb0) on tqpair(0x20699e0): expected_datao=0, payload_size=8 00:24:25.102 [2024-07-24 17:49:46.466810] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.466813] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.508266] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.102 [2024-07-24 17:49:46.508281] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.102 [2024-07-24 17:49:46.508284] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.102 [2024-07-24 17:49:46.508288] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1cb0) on tqpair=0x20699e0 00:24:25.102 ===================================================== 00:24:25.102 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:25.102 ===================================================== 00:24:25.102 Controller Capabilities/Features 00:24:25.102 ================================ 00:24:25.102 Vendor ID: 0000 00:24:25.102 Subsystem Vendor ID: 0000 00:24:25.102 Serial Number: .................... 00:24:25.102 Model Number: ........................................ 
00:24:25.102 Firmware Version: 24.01.1 00:24:25.102 Recommended Arb Burst: 0 00:24:25.102 IEEE OUI Identifier: 00 00 00 00:24:25.102 Multi-path I/O 00:24:25.102 May have multiple subsystem ports: No 00:24:25.102 May have multiple controllers: No 00:24:25.102 Associated with SR-IOV VF: No 00:24:25.102 Max Data Transfer Size: 131072 00:24:25.102 Max Number of Namespaces: 0 00:24:25.102 Max Number of I/O Queues: 1024 00:24:25.102 NVMe Specification Version (VS): 1.3 00:24:25.102 NVMe Specification Version (Identify): 1.3 00:24:25.102 Maximum Queue Entries: 128 00:24:25.102 Contiguous Queues Required: Yes 00:24:25.102 Arbitration Mechanisms Supported 00:24:25.102 Weighted Round Robin: Not Supported 00:24:25.102 Vendor Specific: Not Supported 00:24:25.102 Reset Timeout: 15000 ms 00:24:25.102 Doorbell Stride: 4 bytes 00:24:25.102 NVM Subsystem Reset: Not Supported 00:24:25.102 Command Sets Supported 00:24:25.102 NVM Command Set: Supported 00:24:25.102 Boot Partition: Not Supported 00:24:25.102 Memory Page Size Minimum: 4096 bytes 00:24:25.102 Memory Page Size Maximum: 4096 bytes 00:24:25.102 Persistent Memory Region: Not Supported 00:24:25.102 Optional Asynchronous Events Supported 00:24:25.102 Namespace Attribute Notices: Not Supported 00:24:25.102 Firmware Activation Notices: Not Supported 00:24:25.102 ANA Change Notices: Not Supported 00:24:25.102 PLE Aggregate Log Change Notices: Not Supported 00:24:25.102 LBA Status Info Alert Notices: Not Supported 00:24:25.102 EGE Aggregate Log Change Notices: Not Supported 00:24:25.102 Normal NVM Subsystem Shutdown event: Not Supported 00:24:25.102 Zone Descriptor Change Notices: Not Supported 00:24:25.102 Discovery Log Change Notices: Supported 00:24:25.102 Controller Attributes 00:24:25.102 128-bit Host Identifier: Not Supported 00:24:25.102 Non-Operational Permissive Mode: Not Supported 00:24:25.102 NVM Sets: Not Supported 00:24:25.102 Read Recovery Levels: Not Supported 00:24:25.102 Endurance Groups: Not Supported 00:24:25.102 Predictable Latency Mode: Not Supported 00:24:25.102 Traffic Based Keep ALive: Not Supported 00:24:25.102 Namespace Granularity: Not Supported 00:24:25.102 SQ Associations: Not Supported 00:24:25.102 UUID List: Not Supported 00:24:25.102 Multi-Domain Subsystem: Not Supported 00:24:25.102 Fixed Capacity Management: Not Supported 00:24:25.102 Variable Capacity Management: Not Supported 00:24:25.102 Delete Endurance Group: Not Supported 00:24:25.102 Delete NVM Set: Not Supported 00:24:25.102 Extended LBA Formats Supported: Not Supported 00:24:25.103 Flexible Data Placement Supported: Not Supported 00:24:25.103 00:24:25.103 Controller Memory Buffer Support 00:24:25.103 ================================ 00:24:25.103 Supported: No 00:24:25.103 00:24:25.103 Persistent Memory Region Support 00:24:25.103 ================================ 00:24:25.103 Supported: No 00:24:25.103 00:24:25.103 Admin Command Set Attributes 00:24:25.103 ============================ 00:24:25.103 Security Send/Receive: Not Supported 00:24:25.103 Format NVM: Not Supported 00:24:25.103 Firmware Activate/Download: Not Supported 00:24:25.103 Namespace Management: Not Supported 00:24:25.103 Device Self-Test: Not Supported 00:24:25.103 Directives: Not Supported 00:24:25.103 NVMe-MI: Not Supported 00:24:25.103 Virtualization Management: Not Supported 00:24:25.103 Doorbell Buffer Config: Not Supported 00:24:25.103 Get LBA Status Capability: Not Supported 00:24:25.103 Command & Feature Lockdown Capability: Not Supported 00:24:25.103 Abort Command Limit: 1 00:24:25.103 
Async Event Request Limit: 4 00:24:25.103 Number of Firmware Slots: N/A 00:24:25.103 Firmware Slot 1 Read-Only: N/A 00:24:25.103 Firmware Activation Without Reset: N/A 00:24:25.103 Multiple Update Detection Support: N/A 00:24:25.103 Firmware Update Granularity: No Information Provided 00:24:25.103 Per-Namespace SMART Log: No 00:24:25.103 Asymmetric Namespace Access Log Page: Not Supported 00:24:25.103 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:25.103 Command Effects Log Page: Not Supported 00:24:25.103 Get Log Page Extended Data: Supported 00:24:25.103 Telemetry Log Pages: Not Supported 00:24:25.103 Persistent Event Log Pages: Not Supported 00:24:25.103 Supported Log Pages Log Page: May Support 00:24:25.103 Commands Supported & Effects Log Page: Not Supported 00:24:25.103 Feature Identifiers & Effects Log Page:May Support 00:24:25.103 NVMe-MI Commands & Effects Log Page: May Support 00:24:25.103 Data Area 4 for Telemetry Log: Not Supported 00:24:25.103 Error Log Page Entries Supported: 128 00:24:25.103 Keep Alive: Not Supported 00:24:25.103 00:24:25.103 NVM Command Set Attributes 00:24:25.103 ========================== 00:24:25.103 Submission Queue Entry Size 00:24:25.103 Max: 1 00:24:25.103 Min: 1 00:24:25.103 Completion Queue Entry Size 00:24:25.103 Max: 1 00:24:25.103 Min: 1 00:24:25.103 Number of Namespaces: 0 00:24:25.103 Compare Command: Not Supported 00:24:25.103 Write Uncorrectable Command: Not Supported 00:24:25.103 Dataset Management Command: Not Supported 00:24:25.103 Write Zeroes Command: Not Supported 00:24:25.103 Set Features Save Field: Not Supported 00:24:25.103 Reservations: Not Supported 00:24:25.103 Timestamp: Not Supported 00:24:25.103 Copy: Not Supported 00:24:25.103 Volatile Write Cache: Not Present 00:24:25.103 Atomic Write Unit (Normal): 1 00:24:25.103 Atomic Write Unit (PFail): 1 00:24:25.103 Atomic Compare & Write Unit: 1 00:24:25.103 Fused Compare & Write: Supported 00:24:25.103 Scatter-Gather List 00:24:25.103 SGL Command Set: Supported 00:24:25.103 SGL Keyed: Supported 00:24:25.103 SGL Bit Bucket Descriptor: Not Supported 00:24:25.103 SGL Metadata Pointer: Not Supported 00:24:25.103 Oversized SGL: Not Supported 00:24:25.103 SGL Metadata Address: Not Supported 00:24:25.103 SGL Offset: Supported 00:24:25.103 Transport SGL Data Block: Not Supported 00:24:25.103 Replay Protected Memory Block: Not Supported 00:24:25.103 00:24:25.103 Firmware Slot Information 00:24:25.103 ========================= 00:24:25.103 Active slot: 0 00:24:25.103 00:24:25.103 00:24:25.103 Error Log 00:24:25.103 ========= 00:24:25.103 00:24:25.103 Active Namespaces 00:24:25.103 ================= 00:24:25.103 Discovery Log Page 00:24:25.103 ================== 00:24:25.103 Generation Counter: 2 00:24:25.103 Number of Records: 2 00:24:25.103 Record Format: 0 00:24:25.103 00:24:25.103 Discovery Log Entry 0 00:24:25.103 ---------------------- 00:24:25.103 Transport Type: 3 (TCP) 00:24:25.103 Address Family: 1 (IPv4) 00:24:25.103 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:25.103 Entry Flags: 00:24:25.103 Duplicate Returned Information: 1 00:24:25.103 Explicit Persistent Connection Support for Discovery: 1 00:24:25.103 Transport Requirements: 00:24:25.103 Secure Channel: Not Required 00:24:25.103 Port ID: 0 (0x0000) 00:24:25.103 Controller ID: 65535 (0xffff) 00:24:25.103 Admin Max SQ Size: 128 00:24:25.103 Transport Service Identifier: 4420 00:24:25.103 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:25.103 Transport Address: 10.0.0.2 00:24:25.103 
Discovery Log Entry 1 00:24:25.103 ---------------------- 00:24:25.103 Transport Type: 3 (TCP) 00:24:25.103 Address Family: 1 (IPv4) 00:24:25.103 Subsystem Type: 2 (NVM Subsystem) 00:24:25.103 Entry Flags: 00:24:25.103 Duplicate Returned Information: 0 00:24:25.103 Explicit Persistent Connection Support for Discovery: 0 00:24:25.103 Transport Requirements: 00:24:25.103 Secure Channel: Not Required 00:24:25.103 Port ID: 0 (0x0000) 00:24:25.103 Controller ID: 65535 (0xffff) 00:24:25.103 Admin Max SQ Size: 128 00:24:25.103 Transport Service Identifier: 4420 00:24:25.103 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:25.103 Transport Address: 10.0.0.2 [2024-07-24 17:49:46.508372] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:25.103 [2024-07-24 17:49:46.508384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.103 [2024-07-24 17:49:46.508390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.103 [2024-07-24 17:49:46.508396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.103 [2024-07-24 17:49:46.508401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.103 [2024-07-24 17:49:46.508408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.103 [2024-07-24 17:49:46.508411] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.103 [2024-07-24 17:49:46.508415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:25.103 [2024-07-24 17:49:46.508421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.103 [2024-07-24 17:49:46.508435] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:25.103 [2024-07-24 17:49:46.508588] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.103 [2024-07-24 17:49:46.508598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.103 [2024-07-24 17:49:46.508601] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.103 [2024-07-24 17:49:46.508604] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1b50) on tqpair=0x20699e0 00:24:25.103 [2024-07-24 17:49:46.508612] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.103 [2024-07-24 17:49:46.508615] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.103 [2024-07-24 17:49:46.508618] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:25.103 [2024-07-24 17:49:46.508625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.103 [2024-07-24 17:49:46.508641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:25.103 [2024-07-24 17:49:46.508788] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.103 [2024-07-24 17:49:46.508797] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.103 [2024-07-24 17:49:46.508800] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.103 [2024-07-24 17:49:46.508804] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1b50) on tqpair=0x20699e0 00:24:25.103 [2024-07-24 17:49:46.508810] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:25.103 [2024-07-24 17:49:46.508814] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:25.103 [2024-07-24 17:49:46.508824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.103 [2024-07-24 17:49:46.508830] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.103 [2024-07-24 17:49:46.508833] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:25.103 [2024-07-24 17:49:46.508840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.103 [2024-07-24 17:49:46.508851] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:25.103 [2024-07-24 17:49:46.508994] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.103 [2024-07-24 17:49:46.509003] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.103 [2024-07-24 17:49:46.509006] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.103 [2024-07-24 17:49:46.509010] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1b50) on tqpair=0x20699e0 00:24:25.103 [2024-07-24 17:49:46.509022] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.103 [2024-07-24 17:49:46.509026] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509029] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:25.104 [2024-07-24 17:49:46.509036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.104 [2024-07-24 17:49:46.509054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:25.104 [2024-07-24 17:49:46.509197] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.104 [2024-07-24 17:49:46.509207] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.104 [2024-07-24 17:49:46.509210] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509213] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1b50) on tqpair=0x20699e0 00:24:25.104 [2024-07-24 17:49:46.509225] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509229] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509232] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:25.104 [2024-07-24 17:49:46.509238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.104 [2024-07-24 17:49:46.509250] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:25.104 [2024-07-24 17:49:46.509392] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.104 [2024-07-24 
17:49:46.509402] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.104 [2024-07-24 17:49:46.509405] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509408] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1b50) on tqpair=0x20699e0 00:24:25.104 [2024-07-24 17:49:46.509419] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509423] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509426] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:25.104 [2024-07-24 17:49:46.509433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.104 [2024-07-24 17:49:46.509444] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:25.104 [2024-07-24 17:49:46.509588] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.104 [2024-07-24 17:49:46.509598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.104 [2024-07-24 17:49:46.509601] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509604] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1b50) on tqpair=0x20699e0 00:24:25.104 [2024-07-24 17:49:46.509615] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509619] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509625] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:25.104 [2024-07-24 17:49:46.509632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.104 [2024-07-24 17:49:46.509643] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:25.104 [2024-07-24 17:49:46.509786] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.104 [2024-07-24 17:49:46.509796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.104 [2024-07-24 17:49:46.509799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509802] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1b50) on tqpair=0x20699e0 00:24:25.104 [2024-07-24 17:49:46.509813] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509817] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.509820] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:25.104 [2024-07-24 17:49:46.509826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.104 [2024-07-24 17:49:46.509838] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:25.104 [2024-07-24 17:49:46.509983] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.104 [2024-07-24 17:49:46.509993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.104 [2024-07-24 17:49:46.509996] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:25.104 [2024-07-24 17:49:46.509999] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1b50) on tqpair=0x20699e0 00:24:25.104 [2024-07-24 17:49:46.510011] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.510014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.510017] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:25.104 [2024-07-24 17:49:46.510024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.104 [2024-07-24 17:49:46.510035] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:25.104 [2024-07-24 17:49:46.514052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.104 [2024-07-24 17:49:46.514059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.104 [2024-07-24 17:49:46.514062] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.514065] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1b50) on tqpair=0x20699e0 00:24:25.104 [2024-07-24 17:49:46.514075] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.514078] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.514082] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20699e0) 00:24:25.104 [2024-07-24 17:49:46.514088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.104 [2024-07-24 17:49:46.514099] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d1b50, cid 3, qid 0 00:24:25.104 [2024-07-24 17:49:46.514312] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.104 [2024-07-24 17:49:46.514322] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.104 [2024-07-24 17:49:46.514325] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.514329] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d1b50) on tqpair=0x20699e0 00:24:25.104 [2024-07-24 17:49:46.514338] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:24:25.104 00:24:25.104 17:49:46 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:25.104 [2024-07-24 17:49:46.548243] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
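Note: the spdk_nvme_identify run above attaches to the NVMe-oF/TCP target at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1) and prints the controller report that appears further below; -L all additionally enables the debug log flags, which is where the nvme_tcp.c/nvme_ctrlr.c traces come from. As a minimal sketch only (not the identify tool itself), and assuming an SPDK development environment, roughly the same attach-and-read flow looks like this with SPDK's public host API, reusing the transport ID string passed via -r above:

/*
 * Minimal sketch: connect to the subsystem exercised above and read the
 * controller data that the "Controller Capabilities/Features" report below
 * is generated from. Error handling trimmed; assumes an SPDK build tree.
 */
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr_opts ctrlr_opts;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport ID string as the -r argument in the log above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
	ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
	if (ctrlr == NULL) {
		return 1;
	}

	/* A couple of the fields printed in the report below. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Model Number:  %.40s\n", cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}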
00:24:25.104 [2024-07-24 17:49:46.548276] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid717449 ] 00:24:25.104 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.104 [2024-07-24 17:49:46.576832] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:25.104 [2024-07-24 17:49:46.576873] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:25.104 [2024-07-24 17:49:46.576878] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:25.104 [2024-07-24 17:49:46.576888] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:25.104 [2024-07-24 17:49:46.576895] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:25.104 [2024-07-24 17:49:46.577388] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:25.104 [2024-07-24 17:49:46.577418] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11f79e0 0 00:24:25.104 [2024-07-24 17:49:46.592049] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:25.104 [2024-07-24 17:49:46.592060] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:25.104 [2024-07-24 17:49:46.592063] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:25.104 [2024-07-24 17:49:46.592066] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:25.104 [2024-07-24 17:49:46.592092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.592097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.592101] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f79e0) 00:24:25.104 [2024-07-24 17:49:46.592111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:25.104 [2024-07-24 17:49:46.592125] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f730, cid 0, qid 0 00:24:25.104 [2024-07-24 17:49:46.600051] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.104 [2024-07-24 17:49:46.600059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.104 [2024-07-24 17:49:46.600062] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.600065] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125f730) on tqpair=0x11f79e0 00:24:25.104 [2024-07-24 17:49:46.600074] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:25.104 [2024-07-24 17:49:46.600079] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:25.104 [2024-07-24 17:49:46.600083] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:25.104 [2024-07-24 17:49:46.600093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.600096] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.104 [2024-07-24 
17:49:46.600100] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f79e0) 00:24:25.104 [2024-07-24 17:49:46.600106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.104 [2024-07-24 17:49:46.600118] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f730, cid 0, qid 0 00:24:25.104 [2024-07-24 17:49:46.600357] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.104 [2024-07-24 17:49:46.600370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.104 [2024-07-24 17:49:46.600373] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.104 [2024-07-24 17:49:46.600377] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125f730) on tqpair=0x11f79e0 00:24:25.104 [2024-07-24 17:49:46.600383] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:25.105 [2024-07-24 17:49:46.600392] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:25.105 [2024-07-24 17:49:46.600399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.600402] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.600405] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f79e0) 00:24:25.105 [2024-07-24 17:49:46.600412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.105 [2024-07-24 17:49:46.600427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f730, cid 0, qid 0 00:24:25.105 [2024-07-24 17:49:46.600569] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.105 [2024-07-24 17:49:46.600579] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.105 [2024-07-24 17:49:46.600582] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.600585] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125f730) on tqpair=0x11f79e0 00:24:25.105 [2024-07-24 17:49:46.600591] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:25.105 [2024-07-24 17:49:46.600599] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:25.105 [2024-07-24 17:49:46.600606] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.600609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.600612] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f79e0) 00:24:25.105 [2024-07-24 17:49:46.600618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.105 [2024-07-24 17:49:46.600631] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f730, cid 0, qid 0 00:24:25.105 [2024-07-24 17:49:46.600772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.105 [2024-07-24 17:49:46.600782] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
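Note: the FABRIC PROPERTY GET/SET commands interleaved above are how a fabrics host reads and writes the controller registers during initialization; the "read vs", "read cap", "check en" and subsequent enable states map onto the VS, CAP, CC and CSTS properties. As an illustration only, assuming a connected controller handle such as the one from the previous sketch, the corresponding values can be inspected afterwards through SPDK's public register accessors:

/*
 * Illustrative helper: print the registers that the "read vs" / "read cap" /
 * "check en" states above fetch over the fabric as property GETs.
 * Assumes an already-connected controller (see the previous sketch).
 */
#include <stdio.h>

#include "spdk/nvme.h"

static void print_init_registers(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	printf("VS   : %u.%u.%u\n",
	       (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr, (unsigned)vs.bits.ter);
	printf("CAP  : MQES=%u TO=%u (x500 ms) DSTRD=%u\n",
	       (unsigned)cap.bits.mqes, (unsigned)cap.bits.to, (unsigned)cap.bits.dstrd);
	printf("CSTS : RDY=%u SHST=%u\n",
	       (unsigned)csts.bits.rdy, (unsigned)csts.bits.shst);
}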
00:24:25.105 [2024-07-24 17:49:46.600785] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.600788] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125f730) on tqpair=0x11f79e0 00:24:25.105 [2024-07-24 17:49:46.600794] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:25.105 [2024-07-24 17:49:46.600804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.600808] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.600811] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f79e0) 00:24:25.105 [2024-07-24 17:49:46.600818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.105 [2024-07-24 17:49:46.600830] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f730, cid 0, qid 0 00:24:25.105 [2024-07-24 17:49:46.600977] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.105 [2024-07-24 17:49:46.600987] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.105 [2024-07-24 17:49:46.600990] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.600996] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125f730) on tqpair=0x11f79e0 00:24:25.105 [2024-07-24 17:49:46.601001] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:25.105 [2024-07-24 17:49:46.601005] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:25.105 [2024-07-24 17:49:46.601013] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:25.105 [2024-07-24 17:49:46.601118] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:25.105 [2024-07-24 17:49:46.601122] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:25.105 [2024-07-24 17:49:46.601129] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.601133] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.601136] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f79e0) 00:24:25.105 [2024-07-24 17:49:46.601142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.105 [2024-07-24 17:49:46.601155] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f730, cid 0, qid 0 00:24:25.105 [2024-07-24 17:49:46.601368] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.105 [2024-07-24 17:49:46.601377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.105 [2024-07-24 17:49:46.601381] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.601384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125f730) on 
tqpair=0x11f79e0 00:24:25.105 [2024-07-24 17:49:46.601389] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:25.105 [2024-07-24 17:49:46.601399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.601403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.601406] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f79e0) 00:24:25.105 [2024-07-24 17:49:46.601413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.105 [2024-07-24 17:49:46.601425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f730, cid 0, qid 0 00:24:25.105 [2024-07-24 17:49:46.601569] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.105 [2024-07-24 17:49:46.601578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.105 [2024-07-24 17:49:46.601581] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.601584] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125f730) on tqpair=0x11f79e0 00:24:25.105 [2024-07-24 17:49:46.601589] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:25.105 [2024-07-24 17:49:46.601593] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:25.105 [2024-07-24 17:49:46.601602] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:25.105 [2024-07-24 17:49:46.601610] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:25.105 [2024-07-24 17:49:46.601618] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.601621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.601624] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f79e0) 00:24:25.105 [2024-07-24 17:49:46.601633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.105 [2024-07-24 17:49:46.601646] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f730, cid 0, qid 0 00:24:25.105 [2024-07-24 17:49:46.601905] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:25.105 [2024-07-24 17:49:46.601916] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:25.105 [2024-07-24 17:49:46.601919] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.601922] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f79e0): datao=0, datal=4096, cccid=0 00:24:25.105 [2024-07-24 17:49:46.601926] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125f730) on tqpair(0x11f79e0): expected_datao=0, payload_size=4096 00:24:25.105 [2024-07-24 17:49:46.601933] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.601936] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.602203] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.105 [2024-07-24 17:49:46.602209] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.105 [2024-07-24 17:49:46.602212] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.602215] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125f730) on tqpair=0x11f79e0 00:24:25.105 [2024-07-24 17:49:46.602222] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:25.105 [2024-07-24 17:49:46.602226] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:25.105 [2024-07-24 17:49:46.602230] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:25.105 [2024-07-24 17:49:46.602233] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:25.105 [2024-07-24 17:49:46.602237] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:25.105 [2024-07-24 17:49:46.602241] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:25.105 [2024-07-24 17:49:46.602252] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:25.105 [2024-07-24 17:49:46.602259] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.602262] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.602265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f79e0) 00:24:25.105 [2024-07-24 17:49:46.602272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:25.105 [2024-07-24 17:49:46.602285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f730, cid 0, qid 0 00:24:25.105 [2024-07-24 17:49:46.602425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.105 [2024-07-24 17:49:46.602434] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.105 [2024-07-24 17:49:46.602438] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.602441] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125f730) on tqpair=0x11f79e0 00:24:25.105 [2024-07-24 17:49:46.602448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.602451] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.602454] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f79e0) 00:24:25.105 [2024-07-24 17:49:46.602460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.105 [2024-07-24 17:49:46.602465] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.602471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.105 [2024-07-24 17:49:46.602474] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11f79e0) 00:24:25.105 [2024-07-24 17:49:46.602479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.106 [2024-07-24 17:49:46.602484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.602487] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.602490] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11f79e0) 00:24:25.106 [2024-07-24 17:49:46.602495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.106 [2024-07-24 17:49:46.602500] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.602503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.602506] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f79e0) 00:24:25.106 [2024-07-24 17:49:46.602510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.106 [2024-07-24 17:49:46.602515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.602526] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.602532] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.602535] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.602538] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f79e0) 00:24:25.106 [2024-07-24 17:49:46.602544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.106 [2024-07-24 17:49:46.602557] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f730, cid 0, qid 0 00:24:25.106 [2024-07-24 17:49:46.602562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f890, cid 1, qid 0 00:24:25.106 [2024-07-24 17:49:46.602566] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f9f0, cid 2, qid 0 00:24:25.106 [2024-07-24 17:49:46.602570] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fb50, cid 3, qid 0 00:24:25.106 [2024-07-24 17:49:46.602574] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fcb0, cid 4, qid 0 00:24:25.106 [2024-07-24 17:49:46.602754] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.106 [2024-07-24 17:49:46.602764] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.106 [2024-07-24 17:49:46.602767] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.602770] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fcb0) on tqpair=0x11f79e0 00:24:25.106 [2024-07-24 17:49:46.602775] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:25.106 
[2024-07-24 17:49:46.602780] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.602788] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.602797] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.602803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.602806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.602809] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f79e0) 00:24:25.106 [2024-07-24 17:49:46.602817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:25.106 [2024-07-24 17:49:46.602830] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fcb0, cid 4, qid 0 00:24:25.106 [2024-07-24 17:49:46.602970] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.106 [2024-07-24 17:49:46.602980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.106 [2024-07-24 17:49:46.602983] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.602986] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fcb0) on tqpair=0x11f79e0 00:24:25.106 [2024-07-24 17:49:46.603037] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.603053] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.603061] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.603065] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.603068] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f79e0) 00:24:25.106 [2024-07-24 17:49:46.603074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.106 [2024-07-24 17:49:46.603087] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fcb0, cid 4, qid 0 00:24:25.106 [2024-07-24 17:49:46.603245] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:25.106 [2024-07-24 17:49:46.603255] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:25.106 [2024-07-24 17:49:46.603259] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.603262] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f79e0): datao=0, datal=4096, cccid=4 00:24:25.106 [2024-07-24 17:49:46.603265] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125fcb0) on tqpair(0x11f79e0): expected_datao=0, payload_size=4096 00:24:25.106 [2024-07-24 17:49:46.603503] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.603507] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.648052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.106 [2024-07-24 17:49:46.648063] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.106 [2024-07-24 17:49:46.648066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.648070] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fcb0) on tqpair=0x11f79e0 00:24:25.106 [2024-07-24 17:49:46.648084] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:25.106 [2024-07-24 17:49:46.648095] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.648103] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.648110] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.648113] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.648116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f79e0) 00:24:25.106 [2024-07-24 17:49:46.648123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.106 [2024-07-24 17:49:46.648136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fcb0, cid 4, qid 0 00:24:25.106 [2024-07-24 17:49:46.648571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:25.106 [2024-07-24 17:49:46.648579] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:25.106 [2024-07-24 17:49:46.648582] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.648585] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f79e0): datao=0, datal=4096, cccid=4 00:24:25.106 [2024-07-24 17:49:46.648589] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125fcb0) on tqpair(0x11f79e0): expected_datao=0, payload_size=4096 00:24:25.106 [2024-07-24 17:49:46.648826] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.648830] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.690262] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.106 [2024-07-24 17:49:46.690276] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.106 [2024-07-24 17:49:46.690279] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.690282] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fcb0) on tqpair=0x11f79e0 00:24:25.106 [2024-07-24 17:49:46.690302] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.690314] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:25.106 [2024-07-24 17:49:46.690322] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.106 [2024-07-24 
17:49:46.690325] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.690328] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f79e0) 00:24:25.106 [2024-07-24 17:49:46.690335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.106 [2024-07-24 17:49:46.690348] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fcb0, cid 4, qid 0 00:24:25.106 [2024-07-24 17:49:46.690501] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:25.106 [2024-07-24 17:49:46.690511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:25.106 [2024-07-24 17:49:46.690515] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:25.106 [2024-07-24 17:49:46.690518] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f79e0): datao=0, datal=4096, cccid=4 00:24:25.106 [2024-07-24 17:49:46.690522] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125fcb0) on tqpair(0x11f79e0): expected_datao=0, payload_size=4096 00:24:25.106 [2024-07-24 17:49:46.690768] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:25.107 [2024-07-24 17:49:46.690772] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.367 [2024-07-24 17:49:46.735063] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.367 [2024-07-24 17:49:46.735066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735070] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fcb0) on tqpair=0x11f79e0 00:24:25.367 [2024-07-24 17:49:46.735079] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:25.367 [2024-07-24 17:49:46.735087] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:25.367 [2024-07-24 17:49:46.735096] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:25.367 [2024-07-24 17:49:46.735102] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:25.367 [2024-07-24 17:49:46.735107] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:25.367 [2024-07-24 17:49:46.735113] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:25.367 [2024-07-24 17:49:46.735117] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:25.367 [2024-07-24 17:49:46.735122] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:25.367 [2024-07-24 17:49:46.735135] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735138] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735141] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f79e0) 00:24:25.367 [2024-07-24 17:49:46.735148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.367 [2024-07-24 17:49:46.735154] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735157] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735160] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f79e0) 00:24:25.367 [2024-07-24 17:49:46.735165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.367 [2024-07-24 17:49:46.735179] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fcb0, cid 4, qid 0 00:24:25.367 [2024-07-24 17:49:46.735183] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fe10, cid 5, qid 0 00:24:25.367 [2024-07-24 17:49:46.735343] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.367 [2024-07-24 17:49:46.735353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.367 [2024-07-24 17:49:46.735356] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fcb0) on tqpair=0x11f79e0 00:24:25.367 [2024-07-24 17:49:46.735366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.367 [2024-07-24 17:49:46.735371] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.367 [2024-07-24 17:49:46.735374] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735378] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fe10) on tqpair=0x11f79e0 00:24:25.367 [2024-07-24 17:49:46.735388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735395] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f79e0) 00:24:25.367 [2024-07-24 17:49:46.735401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.367 [2024-07-24 17:49:46.735414] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fe10, cid 5, qid 0 00:24:25.367 [2024-07-24 17:49:46.735564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.367 [2024-07-24 17:49:46.735573] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.367 [2024-07-24 17:49:46.735576] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735580] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fe10) on tqpair=0x11f79e0 00:24:25.367 [2024-07-24 17:49:46.735591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735594] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735597] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f79e0) 00:24:25.367 [2024-07-24 17:49:46.735604] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.367 [2024-07-24 17:49:46.735616] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fe10, cid 5, qid 0 00:24:25.367 [2024-07-24 17:49:46.735763] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.367 [2024-07-24 17:49:46.735773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.367 [2024-07-24 17:49:46.735776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.367 [2024-07-24 17:49:46.735779] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fe10) on tqpair=0x11f79e0 00:24:25.368 [2024-07-24 17:49:46.735790] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.735794] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.735797] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f79e0) 00:24:25.368 [2024-07-24 17:49:46.735803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.368 [2024-07-24 17:49:46.735815] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fe10, cid 5, qid 0 00:24:25.368 [2024-07-24 17:49:46.735957] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.368 [2024-07-24 17:49:46.735966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.368 [2024-07-24 17:49:46.735969] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.735973] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fe10) on tqpair=0x11f79e0 00:24:25.368 [2024-07-24 17:49:46.735988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.735992] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.735995] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f79e0) 00:24:25.368 [2024-07-24 17:49:46.736001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.368 [2024-07-24 17:49:46.736007] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736010] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f79e0) 00:24:25.368 [2024-07-24 17:49:46.736018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.368 [2024-07-24 17:49:46.736025] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736031] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11f79e0) 00:24:25.368 [2024-07-24 17:49:46.736036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:25.368 [2024-07-24 17:49:46.736047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736051] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736054] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11f79e0) 00:24:25.368 [2024-07-24 17:49:46.736059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.368 [2024-07-24 17:49:46.736072] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fe10, cid 5, qid 0 00:24:25.368 [2024-07-24 17:49:46.736077] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fcb0, cid 4, qid 0 00:24:25.368 [2024-07-24 17:49:46.736081] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ff70, cid 6, qid 0 00:24:25.368 [2024-07-24 17:49:46.736085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12600d0, cid 7, qid 0 00:24:25.368 [2024-07-24 17:49:46.736303] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:25.368 [2024-07-24 17:49:46.736314] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:25.368 [2024-07-24 17:49:46.736320] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736323] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f79e0): datao=0, datal=8192, cccid=5 00:24:25.368 [2024-07-24 17:49:46.736327] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125fe10) on tqpair(0x11f79e0): expected_datao=0, payload_size=8192 00:24:25.368 [2024-07-24 17:49:46.736873] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736877] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736881] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:25.368 [2024-07-24 17:49:46.736886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:25.368 [2024-07-24 17:49:46.736889] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736892] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f79e0): datao=0, datal=512, cccid=4 00:24:25.368 [2024-07-24 17:49:46.736896] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125fcb0) on tqpair(0x11f79e0): expected_datao=0, payload_size=512 00:24:25.368 [2024-07-24 17:49:46.736902] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736905] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:25.368 [2024-07-24 17:49:46.736915] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:25.368 [2024-07-24 17:49:46.736918] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736921] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f79e0): datao=0, datal=512, cccid=6 00:24:25.368 [2024-07-24 17:49:46.736924] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125ff70) on tqpair(0x11f79e0): expected_datao=0, payload_size=512 00:24:25.368 [2024-07-24 17:49:46.736930] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736933] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736938] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:25.368 [2024-07-24 17:49:46.736943] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:25.368 [2024-07-24 17:49:46.736946] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736949] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f79e0): datao=0, datal=4096, cccid=7 00:24:25.368 [2024-07-24 17:49:46.736952] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12600d0) on tqpair(0x11f79e0): expected_datao=0, payload_size=4096 00:24:25.368 [2024-07-24 17:49:46.736958] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.736962] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.737184] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.368 [2024-07-24 17:49:46.737190] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.368 [2024-07-24 17:49:46.737192] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.737196] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fe10) on tqpair=0x11f79e0 00:24:25.368 [2024-07-24 17:49:46.737208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.368 [2024-07-24 17:49:46.737213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.368 [2024-07-24 17:49:46.737216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.737220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fcb0) on tqpair=0x11f79e0 00:24:25.368 [2024-07-24 17:49:46.737227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.368 [2024-07-24 17:49:46.737232] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.368 [2024-07-24 17:49:46.737235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.737240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125ff70) on tqpair=0x11f79e0 00:24:25.368 [2024-07-24 17:49:46.737247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.368 [2024-07-24 17:49:46.737251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.368 [2024-07-24 17:49:46.737254] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.368 [2024-07-24 17:49:46.737258] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12600d0) on tqpair=0x11f79e0 00:24:25.368 ===================================================== 00:24:25.368 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.368 ===================================================== 00:24:25.368 Controller Capabilities/Features 00:24:25.368 ================================ 00:24:25.368 Vendor ID: 8086 00:24:25.368 Subsystem Vendor ID: 8086 00:24:25.368 Serial Number: SPDK00000000000001 00:24:25.368 Model Number: SPDK bdev Controller 00:24:25.368 Firmware Version: 24.01.1 00:24:25.368 Recommended Arb Burst: 6 00:24:25.368 IEEE OUI Identifier: e4 d2 5c 00:24:25.368 Multi-path I/O 00:24:25.368 May have multiple subsystem 
ports: Yes 00:24:25.368 May have multiple controllers: Yes 00:24:25.368 Associated with SR-IOV VF: No 00:24:25.368 Max Data Transfer Size: 131072 00:24:25.368 Max Number of Namespaces: 32 00:24:25.368 Max Number of I/O Queues: 127 00:24:25.368 NVMe Specification Version (VS): 1.3 00:24:25.368 NVMe Specification Version (Identify): 1.3 00:24:25.368 Maximum Queue Entries: 128 00:24:25.368 Contiguous Queues Required: Yes 00:24:25.368 Arbitration Mechanisms Supported 00:24:25.368 Weighted Round Robin: Not Supported 00:24:25.368 Vendor Specific: Not Supported 00:24:25.368 Reset Timeout: 15000 ms 00:24:25.368 Doorbell Stride: 4 bytes 00:24:25.368 NVM Subsystem Reset: Not Supported 00:24:25.368 Command Sets Supported 00:24:25.368 NVM Command Set: Supported 00:24:25.368 Boot Partition: Not Supported 00:24:25.368 Memory Page Size Minimum: 4096 bytes 00:24:25.368 Memory Page Size Maximum: 4096 bytes 00:24:25.368 Persistent Memory Region: Not Supported 00:24:25.368 Optional Asynchronous Events Supported 00:24:25.368 Namespace Attribute Notices: Supported 00:24:25.368 Firmware Activation Notices: Not Supported 00:24:25.368 ANA Change Notices: Not Supported 00:24:25.368 PLE Aggregate Log Change Notices: Not Supported 00:24:25.368 LBA Status Info Alert Notices: Not Supported 00:24:25.368 EGE Aggregate Log Change Notices: Not Supported 00:24:25.368 Normal NVM Subsystem Shutdown event: Not Supported 00:24:25.368 Zone Descriptor Change Notices: Not Supported 00:24:25.368 Discovery Log Change Notices: Not Supported 00:24:25.368 Controller Attributes 00:24:25.368 128-bit Host Identifier: Supported 00:24:25.368 Non-Operational Permissive Mode: Not Supported 00:24:25.368 NVM Sets: Not Supported 00:24:25.368 Read Recovery Levels: Not Supported 00:24:25.368 Endurance Groups: Not Supported 00:24:25.368 Predictable Latency Mode: Not Supported 00:24:25.369 Traffic Based Keep ALive: Not Supported 00:24:25.369 Namespace Granularity: Not Supported 00:24:25.369 SQ Associations: Not Supported 00:24:25.369 UUID List: Not Supported 00:24:25.369 Multi-Domain Subsystem: Not Supported 00:24:25.369 Fixed Capacity Management: Not Supported 00:24:25.369 Variable Capacity Management: Not Supported 00:24:25.369 Delete Endurance Group: Not Supported 00:24:25.369 Delete NVM Set: Not Supported 00:24:25.369 Extended LBA Formats Supported: Not Supported 00:24:25.369 Flexible Data Placement Supported: Not Supported 00:24:25.369 00:24:25.369 Controller Memory Buffer Support 00:24:25.369 ================================ 00:24:25.369 Supported: No 00:24:25.369 00:24:25.369 Persistent Memory Region Support 00:24:25.369 ================================ 00:24:25.369 Supported: No 00:24:25.369 00:24:25.369 Admin Command Set Attributes 00:24:25.369 ============================ 00:24:25.369 Security Send/Receive: Not Supported 00:24:25.369 Format NVM: Not Supported 00:24:25.369 Firmware Activate/Download: Not Supported 00:24:25.369 Namespace Management: Not Supported 00:24:25.369 Device Self-Test: Not Supported 00:24:25.369 Directives: Not Supported 00:24:25.369 NVMe-MI: Not Supported 00:24:25.369 Virtualization Management: Not Supported 00:24:25.369 Doorbell Buffer Config: Not Supported 00:24:25.369 Get LBA Status Capability: Not Supported 00:24:25.369 Command & Feature Lockdown Capability: Not Supported 00:24:25.369 Abort Command Limit: 4 00:24:25.369 Async Event Request Limit: 4 00:24:25.369 Number of Firmware Slots: N/A 00:24:25.369 Firmware Slot 1 Read-Only: N/A 00:24:25.369 Firmware Activation Without Reset: N/A 00:24:25.369 Multiple 
Update Detection Support: N/A 00:24:25.369 Firmware Update Granularity: No Information Provided 00:24:25.369 Per-Namespace SMART Log: No 00:24:25.369 Asymmetric Namespace Access Log Page: Not Supported 00:24:25.369 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:25.369 Command Effects Log Page: Supported 00:24:25.369 Get Log Page Extended Data: Supported 00:24:25.369 Telemetry Log Pages: Not Supported 00:24:25.369 Persistent Event Log Pages: Not Supported 00:24:25.369 Supported Log Pages Log Page: May Support 00:24:25.369 Commands Supported & Effects Log Page: Not Supported 00:24:25.369 Feature Identifiers & Effects Log Page:May Support 00:24:25.369 NVMe-MI Commands & Effects Log Page: May Support 00:24:25.369 Data Area 4 for Telemetry Log: Not Supported 00:24:25.369 Error Log Page Entries Supported: 128 00:24:25.369 Keep Alive: Supported 00:24:25.369 Keep Alive Granularity: 10000 ms 00:24:25.369 00:24:25.369 NVM Command Set Attributes 00:24:25.369 ========================== 00:24:25.369 Submission Queue Entry Size 00:24:25.369 Max: 64 00:24:25.369 Min: 64 00:24:25.369 Completion Queue Entry Size 00:24:25.369 Max: 16 00:24:25.369 Min: 16 00:24:25.369 Number of Namespaces: 32 00:24:25.369 Compare Command: Supported 00:24:25.369 Write Uncorrectable Command: Not Supported 00:24:25.369 Dataset Management Command: Supported 00:24:25.369 Write Zeroes Command: Supported 00:24:25.369 Set Features Save Field: Not Supported 00:24:25.369 Reservations: Supported 00:24:25.369 Timestamp: Not Supported 00:24:25.369 Copy: Supported 00:24:25.369 Volatile Write Cache: Present 00:24:25.369 Atomic Write Unit (Normal): 1 00:24:25.369 Atomic Write Unit (PFail): 1 00:24:25.369 Atomic Compare & Write Unit: 1 00:24:25.369 Fused Compare & Write: Supported 00:24:25.369 Scatter-Gather List 00:24:25.369 SGL Command Set: Supported 00:24:25.369 SGL Keyed: Supported 00:24:25.369 SGL Bit Bucket Descriptor: Not Supported 00:24:25.369 SGL Metadata Pointer: Not Supported 00:24:25.369 Oversized SGL: Not Supported 00:24:25.369 SGL Metadata Address: Not Supported 00:24:25.369 SGL Offset: Supported 00:24:25.369 Transport SGL Data Block: Not Supported 00:24:25.369 Replay Protected Memory Block: Not Supported 00:24:25.369 00:24:25.369 Firmware Slot Information 00:24:25.369 ========================= 00:24:25.369 Active slot: 1 00:24:25.369 Slot 1 Firmware Revision: 24.01.1 00:24:25.369 00:24:25.369 00:24:25.369 Commands Supported and Effects 00:24:25.369 ============================== 00:24:25.369 Admin Commands 00:24:25.369 -------------- 00:24:25.369 Get Log Page (02h): Supported 00:24:25.369 Identify (06h): Supported 00:24:25.369 Abort (08h): Supported 00:24:25.369 Set Features (09h): Supported 00:24:25.369 Get Features (0Ah): Supported 00:24:25.369 Asynchronous Event Request (0Ch): Supported 00:24:25.369 Keep Alive (18h): Supported 00:24:25.369 I/O Commands 00:24:25.369 ------------ 00:24:25.369 Flush (00h): Supported LBA-Change 00:24:25.369 Write (01h): Supported LBA-Change 00:24:25.369 Read (02h): Supported 00:24:25.369 Compare (05h): Supported 00:24:25.369 Write Zeroes (08h): Supported LBA-Change 00:24:25.369 Dataset Management (09h): Supported LBA-Change 00:24:25.369 Copy (19h): Supported LBA-Change 00:24:25.369 Unknown (79h): Supported LBA-Change 00:24:25.369 Unknown (7Ah): Supported 00:24:25.369 00:24:25.369 Error Log 00:24:25.369 ========= 00:24:25.369 00:24:25.369 Arbitration 00:24:25.369 =========== 00:24:25.369 Arbitration Burst: 1 00:24:25.369 00:24:25.369 Power Management 00:24:25.369 ================ 00:24:25.369 
Number of Power States: 1 00:24:25.369 Current Power State: Power State #0 00:24:25.369 Power State #0: 00:24:25.369 Max Power: 0.00 W 00:24:25.369 Non-Operational State: Operational 00:24:25.369 Entry Latency: Not Reported 00:24:25.369 Exit Latency: Not Reported 00:24:25.369 Relative Read Throughput: 0 00:24:25.369 Relative Read Latency: 0 00:24:25.369 Relative Write Throughput: 0 00:24:25.369 Relative Write Latency: 0 00:24:25.369 Idle Power: Not Reported 00:24:25.369 Active Power: Not Reported 00:24:25.369 Non-Operational Permissive Mode: Not Supported 00:24:25.369 00:24:25.369 Health Information 00:24:25.369 ================== 00:24:25.369 Critical Warnings: 00:24:25.369 Available Spare Space: OK 00:24:25.369 Temperature: OK 00:24:25.369 Device Reliability: OK 00:24:25.369 Read Only: No 00:24:25.369 Volatile Memory Backup: OK 00:24:25.369 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:25.369 Temperature Threshold: [2024-07-24 17:49:46.737345] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.369 [2024-07-24 17:49:46.737350] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.369 [2024-07-24 17:49:46.737353] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11f79e0) 00:24:25.369 [2024-07-24 17:49:46.737360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.369 [2024-07-24 17:49:46.737373] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12600d0, cid 7, qid 0 00:24:25.369 [2024-07-24 17:49:46.737534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.369 [2024-07-24 17:49:46.737544] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.369 [2024-07-24 17:49:46.737547] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.369 [2024-07-24 17:49:46.737550] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12600d0) on tqpair=0x11f79e0 00:24:25.369 [2024-07-24 17:49:46.737580] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:25.369 [2024-07-24 17:49:46.737591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.369 [2024-07-24 17:49:46.737597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.369 [2024-07-24 17:49:46.737602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.369 [2024-07-24 17:49:46.737608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.369 [2024-07-24 17:49:46.737615] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.369 [2024-07-24 17:49:46.737618] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.369 [2024-07-24 17:49:46.737622] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f79e0) 00:24:25.369 [2024-07-24 17:49:46.737628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.369 [2024-07-24 17:49:46.737641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fb50, cid 3, qid 0 
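Note: the "Prepare to destruct SSD" entry and the ABORTED - SQ DELETION completions above are the host tearing the controller down once the identify pass is done: CC is written to request a shutdown and CSTS is then polled (the RTD3E / "shutdown timeout" entries below) until the shutdown completes. A hedged sketch of driving the same teardown from application code, assuming the asynchronous detach API is available in this SPDK version:

/*
 * Hypothetical teardown sketch: asynchronously detach a controller, which
 * drives the shutdown/destruct sequence traced above. The detach-async API
 * names are assumed to be present in this SPDK version.
 */
#include "spdk/nvme.h"

static int detach_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *detach_ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &detach_ctx) != 0) {
		return -1;
	}

	/* Keep polling until the detach (controller shutdown included) is
	 * finished; spdk_nvme_detach_poll_async() returns 0 once complete. */
	while (spdk_nvme_detach_poll_async(detach_ctx) != 0) {
	}

	return 0;
}

The synchronous spdk_nvme_detach() used in the first sketch wraps the same shutdown sequence in a blocking call.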
00:24:25.369 [2024-07-24 17:49:46.737792] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.369 [2024-07-24 17:49:46.737802] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.369 [2024-07-24 17:49:46.737805] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.369 [2024-07-24 17:49:46.737808] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fb50) on tqpair=0x11f79e0 00:24:25.369 [2024-07-24 17:49:46.737816] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.737819] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.737822] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f79e0) 00:24:25.370 [2024-07-24 17:49:46.737828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.370 [2024-07-24 17:49:46.737845] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fb50, cid 3, qid 0 00:24:25.370 [2024-07-24 17:49:46.738002] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.370 [2024-07-24 17:49:46.738012] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.370 [2024-07-24 17:49:46.738017] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738021] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fb50) on tqpair=0x11f79e0 00:24:25.370 [2024-07-24 17:49:46.738026] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:25.370 [2024-07-24 17:49:46.738030] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:25.370 [2024-07-24 17:49:46.738040] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f79e0) 00:24:25.370 [2024-07-24 17:49:46.738059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.370 [2024-07-24 17:49:46.738071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fb50, cid 3, qid 0 00:24:25.370 [2024-07-24 17:49:46.738214] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.370 [2024-07-24 17:49:46.738223] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.370 [2024-07-24 17:49:46.738227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fb50) on tqpair=0x11f79e0 00:24:25.370 [2024-07-24 17:49:46.738241] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738245] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738248] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f79e0) 00:24:25.370 [2024-07-24 17:49:46.738255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.370 [2024-07-24 
17:49:46.738267] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fb50, cid 3, qid 0 00:24:25.370 [2024-07-24 17:49:46.738415] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.370 [2024-07-24 17:49:46.738424] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.370 [2024-07-24 17:49:46.738427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738430] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fb50) on tqpair=0x11f79e0 00:24:25.370 [2024-07-24 17:49:46.738442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738446] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738449] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f79e0) 00:24:25.370 [2024-07-24 17:49:46.738455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.370 [2024-07-24 17:49:46.738467] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fb50, cid 3, qid 0 00:24:25.370 [2024-07-24 17:49:46.738615] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.370 [2024-07-24 17:49:46.738624] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.370 [2024-07-24 17:49:46.738627] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738630] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fb50) on tqpair=0x11f79e0 00:24:25.370 [2024-07-24 17:49:46.738641] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738645] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738648] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f79e0) 00:24:25.370 [2024-07-24 17:49:46.738655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.370 [2024-07-24 17:49:46.738667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fb50, cid 3, qid 0 00:24:25.370 [2024-07-24 17:49:46.738806] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.370 [2024-07-24 17:49:46.738816] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.370 [2024-07-24 17:49:46.738819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fb50) on tqpair=0x11f79e0 00:24:25.370 [2024-07-24 17:49:46.738834] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.738841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f79e0) 00:24:25.370 [2024-07-24 17:49:46.738847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.370 [2024-07-24 17:49:46.738860] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fb50, cid 3, qid 0 00:24:25.370 [2024-07-24 17:49:46.739007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:24:25.370 [2024-07-24 17:49:46.739017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.370 [2024-07-24 17:49:46.739020] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.739023] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fb50) on tqpair=0x11f79e0 00:24:25.370 [2024-07-24 17:49:46.739034] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.739038] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.739041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f79e0) 00:24:25.370 [2024-07-24 17:49:46.743053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.370 [2024-07-24 17:49:46.743067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125fb50, cid 3, qid 0 00:24:25.370 [2024-07-24 17:49:46.743280] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:25.370 [2024-07-24 17:49:46.743290] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:25.370 [2024-07-24 17:49:46.743293] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:25.370 [2024-07-24 17:49:46.743296] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x125fb50) on tqpair=0x11f79e0 00:24:25.370 [2024-07-24 17:49:46.743306] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:24:25.370 0 Kelvin (-273 Celsius) 00:24:25.370 Available Spare: 0% 00:24:25.370 Available Spare Threshold: 0% 00:24:25.370 Life Percentage Used: 0% 00:24:25.370 Data Units Read: 0 00:24:25.370 Data Units Written: 0 00:24:25.370 Host Read Commands: 0 00:24:25.370 Host Write Commands: 0 00:24:25.370 Controller Busy Time: 0 minutes 00:24:25.370 Power Cycles: 0 00:24:25.370 Power On Hours: 0 hours 00:24:25.370 Unsafe Shutdowns: 0 00:24:25.370 Unrecoverable Media Errors: 0 00:24:25.370 Lifetime Error Log Entries: 0 00:24:25.370 Warning Temperature Time: 0 minutes 00:24:25.370 Critical Temperature Time: 0 minutes 00:24:25.370 00:24:25.370 Number of Queues 00:24:25.370 ================ 00:24:25.370 Number of I/O Submission Queues: 127 00:24:25.370 Number of I/O Completion Queues: 127 00:24:25.370 00:24:25.370 Active Namespaces 00:24:25.370 ================= 00:24:25.370 Namespace ID:1 00:24:25.370 Error Recovery Timeout: Unlimited 00:24:25.370 Command Set Identifier: NVM (00h) 00:24:25.370 Deallocate: Supported 00:24:25.370 Deallocated/Unwritten Error: Not Supported 00:24:25.370 Deallocated Read Value: Unknown 00:24:25.370 Deallocate in Write Zeroes: Not Supported 00:24:25.370 Deallocated Guard Field: 0xFFFF 00:24:25.370 Flush: Supported 00:24:25.370 Reservation: Supported 00:24:25.370 Namespace Sharing Capabilities: Multiple Controllers 00:24:25.370 Size (in LBAs): 131072 (0GiB) 00:24:25.370 Capacity (in LBAs): 131072 (0GiB) 00:24:25.370 Utilization (in LBAs): 131072 (0GiB) 00:24:25.370 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:25.370 EUI64: ABCDEF0123456789 00:24:25.370 UUID: 8bc1cb86-e6aa-4452-8769-cae1252adde4 00:24:25.370 Thin Provisioning: Not Supported 00:24:25.370 Per-NS Atomic Units: Yes 00:24:25.370 Atomic Boundary Size (Normal): 0 00:24:25.370 Atomic Boundary Size (PFail): 0 00:24:25.370 Atomic Boundary Offset: 0 00:24:25.370 Maximum Single 
Source Range Length: 65535 00:24:25.370 Maximum Copy Length: 65535 00:24:25.370 Maximum Source Range Count: 1 00:24:25.370 NGUID/EUI64 Never Reused: No 00:24:25.370 Namespace Write Protected: No 00:24:25.370 Number of LBA Formats: 1 00:24:25.370 Current LBA Format: LBA Format #00 00:24:25.370 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:25.370 00:24:25.370 17:49:46 -- host/identify.sh@51 -- # sync 00:24:25.370 17:49:46 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.370 17:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:25.370 17:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:25.370 17:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:25.370 17:49:46 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:25.370 17:49:46 -- host/identify.sh@56 -- # nvmftestfini 00:24:25.370 17:49:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:25.370 17:49:46 -- nvmf/common.sh@116 -- # sync 00:24:25.370 17:49:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:25.370 17:49:46 -- nvmf/common.sh@119 -- # set +e 00:24:25.370 17:49:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:25.370 17:49:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:25.370 rmmod nvme_tcp 00:24:25.370 rmmod nvme_fabrics 00:24:25.370 rmmod nvme_keyring 00:24:25.370 17:49:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:25.370 17:49:46 -- nvmf/common.sh@123 -- # set -e 00:24:25.370 17:49:46 -- nvmf/common.sh@124 -- # return 0 00:24:25.371 17:49:46 -- nvmf/common.sh@477 -- # '[' -n 717185 ']' 00:24:25.371 17:49:46 -- nvmf/common.sh@478 -- # killprocess 717185 00:24:25.371 17:49:46 -- common/autotest_common.sh@926 -- # '[' -z 717185 ']' 00:24:25.371 17:49:46 -- common/autotest_common.sh@930 -- # kill -0 717185 00:24:25.371 17:49:46 -- common/autotest_common.sh@931 -- # uname 00:24:25.371 17:49:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:25.371 17:49:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 717185 00:24:25.371 17:49:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:25.371 17:49:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:25.371 17:49:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 717185' 00:24:25.371 killing process with pid 717185 00:24:25.371 17:49:46 -- common/autotest_common.sh@945 -- # kill 717185 00:24:25.371 [2024-07-24 17:49:46.872580] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:25.371 17:49:46 -- common/autotest_common.sh@950 -- # wait 717185 00:24:25.628 17:49:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:25.628 17:49:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:25.628 17:49:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:25.628 17:49:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.628 17:49:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:25.628 17:49:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.628 17:49:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.628 17:49:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.160 17:49:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:28.160 00:24:28.160 real 0m8.902s 00:24:28.160 user 0m7.372s 00:24:28.160 sys 0m4.229s 00:24:28.160 17:49:49 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.160 17:49:49 -- common/autotest_common.sh@10 -- # set +x 00:24:28.160 ************************************ 00:24:28.160 END TEST nvmf_identify 00:24:28.160 ************************************ 00:24:28.160 17:49:49 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:28.160 17:49:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:28.160 17:49:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:28.160 17:49:49 -- common/autotest_common.sh@10 -- # set +x 00:24:28.160 ************************************ 00:24:28.160 START TEST nvmf_perf 00:24:28.160 ************************************ 00:24:28.160 17:49:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:28.160 * Looking for test storage... 00:24:28.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.160 17:49:49 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.160 17:49:49 -- nvmf/common.sh@7 -- # uname -s 00:24:28.160 17:49:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.160 17:49:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.160 17:49:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.160 17:49:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.160 17:49:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.160 17:49:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.160 17:49:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.160 17:49:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.160 17:49:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.160 17:49:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.160 17:49:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:28.160 17:49:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:28.160 17:49:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.160 17:49:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.160 17:49:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.160 17:49:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.160 17:49:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.160 17:49:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.160 17:49:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.160 17:49:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.160 17:49:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.160 17:49:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.160 17:49:49 -- paths/export.sh@5 -- # export PATH 00:24:28.160 17:49:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.160 17:49:49 -- nvmf/common.sh@46 -- # : 0 00:24:28.160 17:49:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:28.160 17:49:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:28.160 17:49:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:28.160 17:49:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.160 17:49:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.160 17:49:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:28.160 17:49:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:28.160 17:49:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:28.161 17:49:49 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:28.161 17:49:49 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:28.161 17:49:49 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:28.161 17:49:49 -- host/perf.sh@17 -- # nvmftestinit 00:24:28.161 17:49:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:28.161 17:49:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.161 17:49:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:28.161 17:49:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:28.161 17:49:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:28.161 17:49:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.161 17:49:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.161 17:49:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.161 17:49:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:28.161 17:49:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:28.161 17:49:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:28.161 17:49:49 -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.431 17:49:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:33.431 17:49:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:33.431 17:49:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:33.431 17:49:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:33.431 17:49:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:33.431 17:49:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:33.431 17:49:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:33.431 17:49:54 -- nvmf/common.sh@294 -- # net_devs=() 00:24:33.431 17:49:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:33.431 17:49:54 -- nvmf/common.sh@295 -- # e810=() 00:24:33.432 17:49:54 -- nvmf/common.sh@295 -- # local -ga e810 00:24:33.432 17:49:54 -- nvmf/common.sh@296 -- # x722=() 00:24:33.432 17:49:54 -- nvmf/common.sh@296 -- # local -ga x722 00:24:33.432 17:49:54 -- nvmf/common.sh@297 -- # mlx=() 00:24:33.432 17:49:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:33.432 17:49:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.432 17:49:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:33.432 17:49:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:33.432 17:49:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:33.432 17:49:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:33.432 17:49:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:33.432 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:33.432 17:49:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:33.432 17:49:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:33.432 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:33.432 17:49:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
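The scan above matched both E810 functions (8086:159b at 0000:86:00.0 and 0000:86:00.1), and the lines that follow resolve each one to its net device by globbing /sys/bus/pci/devices/<bdf>/net, as gather_supported_nvmf_pci_devs does. A hand-run equivalent of that check, using the BDFs from this run, would look roughly like the sketch below (plain sysfs only, no SPDK involved).

  # Map each E810 function to its kernel net device and bound driver via sysfs.
  for bdf in 0000:86:00.0 0000:86:00.1; do
      ls "/sys/bus/pci/devices/$bdf/net"                            # cvl_0_0 / cvl_0_1 in this run
      basename "$(readlink /sys/bus/pci/devices/$bdf/driver)"       # expected: ice
  done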
00:24:33.432 17:49:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:33.432 17:49:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:33.432 17:49:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.432 17:49:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:33.432 17:49:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.432 17:49:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:33.432 Found net devices under 0000:86:00.0: cvl_0_0 00:24:33.432 17:49:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.432 17:49:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:33.432 17:49:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.432 17:49:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:33.432 17:49:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.432 17:49:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:33.432 Found net devices under 0000:86:00.1: cvl_0_1 00:24:33.432 17:49:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.432 17:49:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:33.432 17:49:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:33.432 17:49:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:33.432 17:49:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.432 17:49:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.432 17:49:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.432 17:49:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:33.432 17:49:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.432 17:49:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.432 17:49:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:33.432 17:49:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.432 17:49:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.432 17:49:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:33.432 17:49:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:33.432 17:49:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.432 17:49:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.432 17:49:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.432 17:49:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.432 17:49:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:33.432 17:49:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.432 17:49:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.432 17:49:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.432 17:49:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:33.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:33.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:24:33.432 00:24:33.432 --- 10.0.0.2 ping statistics --- 00:24:33.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.432 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:33.432 17:49:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:33.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:24:33.432 00:24:33.432 --- 10.0.0.1 ping statistics --- 00:24:33.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.432 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:33.432 17:49:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.432 17:49:54 -- nvmf/common.sh@410 -- # return 0 00:24:33.432 17:49:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:33.432 17:49:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.432 17:49:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:33.432 17:49:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.432 17:49:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:33.432 17:49:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:33.432 17:49:54 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:33.432 17:49:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:33.432 17:49:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:33.432 17:49:54 -- common/autotest_common.sh@10 -- # set +x 00:24:33.432 17:49:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:33.432 17:49:54 -- nvmf/common.sh@469 -- # nvmfpid=720981 00:24:33.432 17:49:54 -- nvmf/common.sh@470 -- # waitforlisten 720981 00:24:33.432 17:49:54 -- common/autotest_common.sh@819 -- # '[' -z 720981 ']' 00:24:33.432 17:49:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.432 17:49:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:33.432 17:49:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.432 17:49:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:33.432 17:49:54 -- common/autotest_common.sh@10 -- # set +x 00:24:33.692 [2024-07-24 17:49:55.038553] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:33.692 [2024-07-24 17:49:55.038598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.692 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.692 [2024-07-24 17:49:55.096367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.692 [2024-07-24 17:49:55.169330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:33.692 [2024-07-24 17:49:55.169455] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.692 [2024-07-24 17:49:55.169466] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
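With connectivity between cvl_0_0_ns_spdk (10.0.0.2) and the initiator side (10.0.0.1) confirmed and nvmf_tgt started inside the namespace, the perf host script provisions the target over JSON-RPC before driving it with spdk_nvme_perf. A condensed, hand-runnable sketch of those steps is below; paths are shortened relative to the SPDK checkout, and the arguments are copied from the xtrace that follows in this log.

  # Provision the TCP target (run from the SPDK repo root once nvmf_tgt is up).
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o                              # TCP transport, default options
  $RPC bdev_malloc_create 64 512                                    # 64 MiB / 512 B blocks -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Example initiator-side run (same flags as the first perf invocation later in this log):
  ./build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'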
00:24:33.692 [2024-07-24 17:49:55.169474] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.692 [2024-07-24 17:49:55.169533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.692 [2024-07-24 17:49:55.169633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.692 [2024-07-24 17:49:55.169717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.692 [2024-07-24 17:49:55.169719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.258 17:49:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:34.258 17:49:55 -- common/autotest_common.sh@852 -- # return 0 00:24:34.258 17:49:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:34.258 17:49:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:34.258 17:49:55 -- common/autotest_common.sh@10 -- # set +x 00:24:34.516 17:49:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.516 17:49:55 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:34.516 17:49:55 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:37.805 17:49:58 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:37.805 17:49:58 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:37.805 17:49:59 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:24:37.805 17:49:59 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:37.805 17:49:59 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:37.805 17:49:59 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:24:37.805 17:49:59 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:37.805 17:49:59 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:37.806 17:49:59 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:38.063 [2024-07-24 17:49:59.420197] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.063 17:49:59 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.063 17:49:59 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:38.063 17:49:59 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.322 17:49:59 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:38.322 17:49:59 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:38.580 17:49:59 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.580 [2024-07-24 17:50:00.143041] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.580 17:50:00 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:38.838 17:50:00 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:24:38.838 17:50:00 -- host/perf.sh@53 -- # perf_app -i 0 -q 
32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:38.838 17:50:00 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:38.838 17:50:00 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:40.210 Initializing NVMe Controllers 00:24:40.210 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:24:40.210 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:24:40.210 Initialization complete. Launching workers. 00:24:40.210 ======================================================== 00:24:40.210 Latency(us) 00:24:40.210 Device Information : IOPS MiB/s Average min max 00:24:40.210 PCIE (0000:5e:00.0) NSID 1 from core 0: 99250.10 387.70 321.81 9.45 4407.87 00:24:40.210 ======================================================== 00:24:40.210 Total : 99250.10 387.70 321.81 9.45 4407.87 00:24:40.210 00:24:40.210 17:50:01 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:40.210 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.585 Initializing NVMe Controllers 00:24:41.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:41.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:41.585 Initialization complete. Launching workers. 00:24:41.585 ======================================================== 00:24:41.585 Latency(us) 00:24:41.585 Device Information : IOPS MiB/s Average min max 00:24:41.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.00 0.29 13411.80 549.80 45517.75 00:24:41.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16467.28 7814.87 47885.05 00:24:41.585 ======================================================== 00:24:41.585 Total : 136.00 0.53 14782.27 549.80 47885.05 00:24:41.585 00:24:41.585 17:50:02 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:41.585 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.519 Initializing NVMe Controllers 00:24:42.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:42.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:42.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:42.519 Initialization complete. Launching workers. 
00:24:42.519 ======================================================== 00:24:42.519 Latency(us) 00:24:42.519 Device Information : IOPS MiB/s Average min max 00:24:42.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7355.00 28.73 4354.55 742.02 12473.45 00:24:42.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3911.00 15.28 8227.08 6436.85 16046.30 00:24:42.519 ======================================================== 00:24:42.519 Total : 11266.00 44.01 5698.90 742.02 16046.30 00:24:42.519 00:24:42.519 17:50:03 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:42.519 17:50:03 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:42.520 17:50:03 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:42.520 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.804 Initializing NVMe Controllers 00:24:45.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.804 Controller IO queue size 128, less than required. 00:24:45.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.804 Controller IO queue size 128, less than required. 00:24:45.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:45.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:45.804 Initialization complete. Launching workers. 00:24:45.804 ======================================================== 00:24:45.804 Latency(us) 00:24:45.804 Device Information : IOPS MiB/s Average min max 00:24:45.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 800.06 200.02 165923.44 98628.10 262325.02 00:24:45.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 587.31 146.83 230734.03 79834.55 367776.87 00:24:45.804 ======================================================== 00:24:45.804 Total : 1387.37 346.84 193359.46 79834.55 367776.87 00:24:45.804 00:24:45.804 17:50:06 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:45.804 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.804 No valid NVMe controllers or AIO or URING devices found 00:24:45.804 Initializing NVMe Controllers 00:24:45.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.804 Controller IO queue size 128, less than required. 00:24:45.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.804 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:45.804 Controller IO queue size 128, less than required. 00:24:45.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.804 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:45.804 WARNING: Some requested NVMe devices were skipped 00:24:45.804 17:50:06 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:45.804 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.338 Initializing NVMe Controllers 00:24:48.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.338 Controller IO queue size 128, less than required. 00:24:48.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:48.338 Controller IO queue size 128, less than required. 00:24:48.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:48.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:48.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:48.338 Initialization complete. Launching workers. 00:24:48.338 00:24:48.338 ==================== 00:24:48.338 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:48.338 TCP transport: 00:24:48.338 polls: 56476 00:24:48.338 idle_polls: 19285 00:24:48.338 sock_completions: 37191 00:24:48.338 nvme_completions: 2292 00:24:48.338 submitted_requests: 3624 00:24:48.338 queued_requests: 1 00:24:48.338 00:24:48.338 ==================== 00:24:48.338 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:48.338 TCP transport: 00:24:48.338 polls: 60154 00:24:48.338 idle_polls: 21442 00:24:48.338 sock_completions: 38712 00:24:48.338 nvme_completions: 2957 00:24:48.338 submitted_requests: 4571 00:24:48.338 queued_requests: 1 00:24:48.338 ======================================================== 00:24:48.338 Latency(us) 00:24:48.338 Device Information : IOPS MiB/s Average min max 00:24:48.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 636.46 159.12 213905.68 99506.42 386659.03 00:24:48.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 802.95 200.74 165350.56 94795.54 255571.49 00:24:48.338 ======================================================== 00:24:48.338 Total : 1439.42 359.85 186820.06 94795.54 386659.03 00:24:48.338 00:24:48.338 17:50:09 -- host/perf.sh@66 -- # sync 00:24:48.338 17:50:09 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.338 17:50:09 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:24:48.338 17:50:09 -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:24:48.338 17:50:09 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:24:51.621 17:50:12 -- host/perf.sh@72 -- # ls_guid=e24cbd45-340a-4f65-a295-14e2d4f04112 00:24:51.621 17:50:12 -- host/perf.sh@73 -- # get_lvs_free_mb e24cbd45-340a-4f65-a295-14e2d4f04112 00:24:51.621 17:50:12 -- common/autotest_common.sh@1343 -- # local lvs_uuid=e24cbd45-340a-4f65-a295-14e2d4f04112 00:24:51.621 17:50:12 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:51.621 17:50:12 -- common/autotest_common.sh@1345 -- # local fc 00:24:51.621 17:50:12 -- common/autotest_common.sh@1346 -- # local cs 00:24:51.621 17:50:12 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:51.621 17:50:13 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:51.621 { 00:24:51.621 "uuid": "e24cbd45-340a-4f65-a295-14e2d4f04112", 00:24:51.621 "name": "lvs_0", 00:24:51.621 "base_bdev": "Nvme0n1", 00:24:51.621 "total_data_clusters": 238234, 00:24:51.621 "free_clusters": 238234, 00:24:51.621 "block_size": 512, 00:24:51.621 "cluster_size": 4194304 00:24:51.621 } 00:24:51.621 ]' 00:24:51.621 17:50:13 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="e24cbd45-340a-4f65-a295-14e2d4f04112") .free_clusters' 00:24:51.621 17:50:13 -- common/autotest_common.sh@1348 -- # fc=238234 00:24:51.621 17:50:13 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="e24cbd45-340a-4f65-a295-14e2d4f04112") .cluster_size' 00:24:51.621 17:50:13 -- common/autotest_common.sh@1349 -- # cs=4194304 00:24:51.621 17:50:13 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:24:51.621 17:50:13 -- common/autotest_common.sh@1353 -- # echo 952936 00:24:51.621 952936 00:24:51.621 17:50:13 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:24:51.621 17:50:13 -- host/perf.sh@78 -- # free_mb=20480 00:24:51.621 17:50:13 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e24cbd45-340a-4f65-a295-14e2d4f04112 lbd_0 20480 00:24:52.188 17:50:13 -- host/perf.sh@80 -- # lb_guid=66d76f7d-bc0c-49a1-8bab-fa0b266a6bc2 00:24:52.188 17:50:13 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 66d76f7d-bc0c-49a1-8bab-fa0b266a6bc2 lvs_n_0 00:24:52.752 17:50:14 -- host/perf.sh@83 -- # ls_nested_guid=4413f207-c457-4d08-b65f-8c0c9d2cbb49 00:24:52.752 17:50:14 -- host/perf.sh@84 -- # get_lvs_free_mb 4413f207-c457-4d08-b65f-8c0c9d2cbb49 00:24:52.752 17:50:14 -- common/autotest_common.sh@1343 -- # local lvs_uuid=4413f207-c457-4d08-b65f-8c0c9d2cbb49 00:24:52.752 17:50:14 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:52.752 17:50:14 -- common/autotest_common.sh@1345 -- # local fc 00:24:52.752 17:50:14 -- common/autotest_common.sh@1346 -- # local cs 00:24:52.752 17:50:14 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:53.010 17:50:14 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:53.010 { 00:24:53.010 "uuid": "e24cbd45-340a-4f65-a295-14e2d4f04112", 00:24:53.010 "name": "lvs_0", 00:24:53.010 "base_bdev": "Nvme0n1", 00:24:53.010 "total_data_clusters": 238234, 00:24:53.010 "free_clusters": 233114, 00:24:53.010 "block_size": 512, 00:24:53.010 "cluster_size": 4194304 00:24:53.010 }, 00:24:53.010 { 00:24:53.010 "uuid": "4413f207-c457-4d08-b65f-8c0c9d2cbb49", 00:24:53.010 "name": "lvs_n_0", 00:24:53.010 "base_bdev": "66d76f7d-bc0c-49a1-8bab-fa0b266a6bc2", 00:24:53.010 "total_data_clusters": 5114, 00:24:53.010 "free_clusters": 5114, 00:24:53.010 "block_size": 512, 00:24:53.010 "cluster_size": 4194304 00:24:53.010 } 00:24:53.010 ]' 00:24:53.010 17:50:14 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="4413f207-c457-4d08-b65f-8c0c9d2cbb49") .free_clusters' 00:24:53.010 17:50:14 -- common/autotest_common.sh@1348 -- # fc=5114 00:24:53.010 17:50:14 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="4413f207-c457-4d08-b65f-8c0c9d2cbb49") .cluster_size' 00:24:53.010 17:50:14 -- common/autotest_common.sh@1349 -- # cs=4194304 00:24:53.010 17:50:14 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:24:53.010 17:50:14 -- common/autotest_common.sh@1353 -- # echo 20456 00:24:53.010 20456 00:24:53.010 17:50:14 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:24:53.010 17:50:14 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4413f207-c457-4d08-b65f-8c0c9d2cbb49 lbd_nest_0 20456 00:24:53.267 17:50:14 -- host/perf.sh@88 -- # lb_nested_guid=204cfa14-529c-4377-b4c9-51432c255798 00:24:53.267 17:50:14 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:53.267 17:50:14 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:24:53.267 17:50:14 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 204cfa14-529c-4377-b4c9-51432c255798 00:24:53.524 17:50:15 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.782 17:50:15 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:24:53.782 17:50:15 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:24:53.782 17:50:15 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:53.782 17:50:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:53.782 17:50:15 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:53.782 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.995 Initializing NVMe Controllers 00:25:05.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:05.995 Initialization complete. Launching workers. 00:25:05.995 ======================================================== 00:25:05.995 Latency(us) 00:25:05.995 Device Information : IOPS MiB/s Average min max 00:25:05.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.10 0.02 22727.11 373.72 47880.61 00:25:05.995 ======================================================== 00:25:05.995 Total : 44.10 0.02 22727.11 373.72 47880.61 00:25:05.995 00:25:05.995 17:50:25 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:05.995 17:50:25 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:05.995 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.965 Initializing NVMe Controllers 00:25:15.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:15.965 Initialization complete. Launching workers. 
00:25:15.965 ======================================================== 00:25:15.965 Latency(us) 00:25:15.966 Device Information : IOPS MiB/s Average min max 00:25:15.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.89 10.36 12064.22 5033.87 18952.68 00:25:15.966 ======================================================== 00:25:15.966 Total : 82.89 10.36 12064.22 5033.87 18952.68 00:25:15.966 00:25:15.966 17:50:35 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:15.966 17:50:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:15.966 17:50:35 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:15.966 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.934 Initializing NVMe Controllers 00:25:25.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:25.934 Initialization complete. Launching workers. 00:25:25.934 ======================================================== 00:25:25.934 Latency(us) 00:25:25.934 Device Information : IOPS MiB/s Average min max 00:25:25.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6690.45 3.27 4783.58 539.80 12044.08 00:25:25.935 ======================================================== 00:25:25.935 Total : 6690.45 3.27 4783.58 539.80 12044.08 00:25:25.935 00:25:25.935 17:50:46 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:25.935 17:50:46 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.935 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.908 Initializing NVMe Controllers 00:25:35.908 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:35.908 Initialization complete. Launching workers. 00:25:35.908 ======================================================== 00:25:35.908 Latency(us) 00:25:35.908 Device Information : IOPS MiB/s Average min max 00:25:35.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1495.60 186.95 21432.27 1551.76 71590.97 00:25:35.908 ======================================================== 00:25:35.908 Total : 1495.60 186.95 21432.27 1551.76 71590.97 00:25:35.908 00:25:35.908 17:50:56 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:35.908 17:50:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:35.908 17:50:56 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:35.908 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.880 Initializing NVMe Controllers 00:25:45.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:45.880 Controller IO queue size 128, less than required. 00:25:45.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:45.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:45.880 Initialization complete. Launching workers. 
00:25:45.880 ======================================================== 00:25:45.880 Latency(us) 00:25:45.880 Device Information : IOPS MiB/s Average min max 00:25:45.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14488.48 7.07 8835.36 1404.78 22986.61 00:25:45.880 ======================================================== 00:25:45.880 Total : 14488.48 7.07 8835.36 1404.78 22986.61 00:25:45.880 00:25:45.880 17:51:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:45.880 17:51:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.880 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.863 Initializing NVMe Controllers 00:25:55.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:55.863 Controller IO queue size 128, less than required. 00:25:55.863 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:55.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:55.863 Initialization complete. Launching workers. 00:25:55.863 ======================================================== 00:25:55.863 Latency(us) 00:25:55.863 Device Information : IOPS MiB/s Average min max 00:25:55.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1118.40 139.80 114864.30 16098.92 223838.09 00:25:55.863 ======================================================== 00:25:55.863 Total : 1118.40 139.80 114864.30 16098.92 223838.09 00:25:55.863 00:25:55.864 17:51:17 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:56.124 17:51:17 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 204cfa14-529c-4377-b4c9-51432c255798 00:25:56.690 17:51:18 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:25:56.949 17:51:18 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 66d76f7d-bc0c-49a1-8bab-fa0b266a6bc2 00:25:57.207 17:51:18 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:57.207 17:51:18 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:57.207 17:51:18 -- host/perf.sh@114 -- # nvmftestfini 00:25:57.207 17:51:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:57.207 17:51:18 -- nvmf/common.sh@116 -- # sync 00:25:57.207 17:51:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:57.207 17:51:18 -- nvmf/common.sh@119 -- # set +e 00:25:57.207 17:51:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:57.207 17:51:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:57.207 rmmod nvme_tcp 00:25:57.207 rmmod nvme_fabrics 00:25:57.207 rmmod nvme_keyring 00:25:57.464 17:51:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:57.464 17:51:18 -- nvmf/common.sh@123 -- # set -e 00:25:57.465 17:51:18 -- nvmf/common.sh@124 -- # return 0 00:25:57.465 17:51:18 -- nvmf/common.sh@477 -- # '[' -n 720981 ']' 00:25:57.465 17:51:18 -- nvmf/common.sh@478 -- # killprocess 720981 00:25:57.465 17:51:18 -- common/autotest_common.sh@926 -- # '[' -z 720981 ']' 00:25:57.465 17:51:18 -- common/autotest_common.sh@930 -- # kill -0 
720981 00:25:57.465 17:51:18 -- common/autotest_common.sh@931 -- # uname 00:25:57.465 17:51:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:57.465 17:51:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 720981 00:25:57.465 17:51:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:57.465 17:51:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:57.465 17:51:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 720981' 00:25:57.465 killing process with pid 720981 00:25:57.465 17:51:18 -- common/autotest_common.sh@945 -- # kill 720981 00:25:57.465 17:51:18 -- common/autotest_common.sh@950 -- # wait 720981 00:25:58.841 17:51:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:58.841 17:51:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:58.841 17:51:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:58.841 17:51:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:58.841 17:51:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:58.841 17:51:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.841 17:51:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:58.841 17:51:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.377 17:51:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:01.377 00:26:01.377 real 1m33.239s 00:26:01.377 user 5m36.311s 00:26:01.377 sys 0m13.451s 00:26:01.377 17:51:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:01.377 17:51:22 -- common/autotest_common.sh@10 -- # set +x 00:26:01.377 ************************************ 00:26:01.377 END TEST nvmf_perf 00:26:01.377 ************************************ 00:26:01.377 17:51:22 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:01.377 17:51:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:01.377 17:51:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:01.377 17:51:22 -- common/autotest_common.sh@10 -- # set +x 00:26:01.377 ************************************ 00:26:01.377 START TEST nvmf_fio_host 00:26:01.377 ************************************ 00:26:01.377 17:51:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:01.377 * Looking for test storage... 
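For reference, each nvmf_perf pass above is one iteration of the io_size loop in host/perf.sh, pointing spdk_nvme_perf at the TCP listener; a minimal sketch condensed from the 128 KiB pass in the trace (workspace prefix trimmed, the flag gloss is an added reading, not taken from the log):

# sketch of the perf invocation logged above; only the long binary path is shortened
./build/bin/spdk_nvme_perf \
    -q 128 -o 131072 -w randrw -M 50 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# roughly: queue depth 128, 128 KiB IOs, 50/50 random read/write mix, 10 s run,
# against the 10.0.0.2:4420 TCP listener set up earlier in the suite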
00:26:01.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.377 17:51:22 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.377 17:51:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.377 17:51:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.377 17:51:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.377 17:51:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.377 17:51:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.377 17:51:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.377 17:51:22 -- paths/export.sh@5 -- # export PATH 00:26:01.377 17:51:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.377 17:51:22 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.377 17:51:22 -- nvmf/common.sh@7 -- # uname -s 00:26:01.377 17:51:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.377 17:51:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.377 17:51:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.377 17:51:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.377 17:51:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.377 17:51:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.377 17:51:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.377 17:51:22 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.377 17:51:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.377 17:51:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.377 17:51:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:01.377 17:51:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:01.377 17:51:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.377 17:51:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.377 17:51:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.377 17:51:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.377 17:51:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.377 17:51:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.377 17:51:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.377 17:51:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.377 17:51:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.378 17:51:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.378 17:51:22 -- paths/export.sh@5 -- # export PATH 00:26:01.378 17:51:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.378 17:51:22 -- nvmf/common.sh@46 -- # : 0 00:26:01.378 17:51:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:01.378 17:51:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:01.378 17:51:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:01.378 17:51:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.378 17:51:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.378 17:51:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:01.378 17:51:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:01.378 17:51:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:01.378 17:51:22 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:01.378 17:51:22 -- host/fio.sh@14 -- # nvmftestinit 00:26:01.378 17:51:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:01.378 17:51:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.378 17:51:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:01.378 17:51:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:01.378 17:51:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:01.378 17:51:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.378 17:51:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:01.378 17:51:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.378 17:51:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:01.378 17:51:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:01.378 17:51:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:01.378 17:51:22 -- common/autotest_common.sh@10 -- # set +x 00:26:06.644 17:51:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:06.644 17:51:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:06.644 17:51:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:06.644 17:51:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:06.644 17:51:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:06.644 17:51:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:06.644 17:51:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:06.644 17:51:27 -- nvmf/common.sh@294 -- # net_devs=() 00:26:06.644 17:51:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:06.644 17:51:27 -- nvmf/common.sh@295 -- # e810=() 00:26:06.644 17:51:27 -- nvmf/common.sh@295 -- # local -ga e810 00:26:06.644 17:51:27 -- nvmf/common.sh@296 -- # x722=() 00:26:06.644 17:51:27 -- nvmf/common.sh@296 -- # local -ga x722 00:26:06.644 17:51:27 -- nvmf/common.sh@297 -- # mlx=() 00:26:06.644 17:51:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:06.644 17:51:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.644 17:51:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.644 17:51:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.644 17:51:27 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.644 17:51:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.644 17:51:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.644 17:51:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.644 17:51:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.644 17:51:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.644 17:51:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.644 17:51:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.644 17:51:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:06.644 17:51:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:06.644 17:51:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:06.644 17:51:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:06.644 17:51:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:06.644 17:51:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:06.644 17:51:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:06.644 17:51:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:06.645 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:06.645 17:51:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:06.645 17:51:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:06.645 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:06.645 17:51:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:06.645 17:51:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:06.645 17:51:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.645 17:51:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:06.645 17:51:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.645 17:51:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:06.645 Found net devices under 0000:86:00.0: cvl_0_0 00:26:06.645 17:51:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.645 17:51:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:06.645 17:51:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.645 17:51:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:06.645 17:51:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.645 17:51:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:06.645 Found net devices under 0000:86:00.1: cvl_0_1 
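The scan above matches both E810 functions (vendor 0x8086, device 0x159b) and then resolves their kernel interface names through sysfs; stripped of the surrounding loop, the lookup is just a glob, sketched here with the PCI address taken from the log and the rest illustrative:

pci=0000:86:00.0
# each netdev bound to this PCI function appears as a directory under .../net/
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # cvl_0_0 in this run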
00:26:06.645 17:51:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.645 17:51:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:06.645 17:51:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:06.645 17:51:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:06.645 17:51:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.645 17:51:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.645 17:51:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.645 17:51:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:06.645 17:51:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.645 17:51:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.645 17:51:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:06.645 17:51:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.645 17:51:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.645 17:51:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:06.645 17:51:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:06.645 17:51:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.645 17:51:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.645 17:51:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.645 17:51:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.645 17:51:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:06.645 17:51:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.645 17:51:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.645 17:51:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.645 17:51:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:06.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:26:06.645 00:26:06.645 --- 10.0.0.2 ping statistics --- 00:26:06.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.645 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:26:06.645 17:51:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:06.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:26:06.645 00:26:06.645 --- 10.0.0.1 ping statistics --- 00:26:06.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.645 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:26:06.645 17:51:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.645 17:51:27 -- nvmf/common.sh@410 -- # return 0 00:26:06.645 17:51:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:06.645 17:51:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.645 17:51:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:06.645 17:51:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.645 17:51:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:06.645 17:51:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:06.645 17:51:27 -- host/fio.sh@16 -- # [[ y != y ]] 00:26:06.645 17:51:27 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:06.645 17:51:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:06.645 17:51:27 -- common/autotest_common.sh@10 -- # set +x 00:26:06.645 17:51:27 -- host/fio.sh@24 -- # nvmfpid=738849 00:26:06.645 17:51:27 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:06.645 17:51:27 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:06.645 17:51:27 -- host/fio.sh@28 -- # waitforlisten 738849 00:26:06.645 17:51:27 -- common/autotest_common.sh@819 -- # '[' -z 738849 ']' 00:26:06.645 17:51:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.645 17:51:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:06.645 17:51:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.645 17:51:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:06.645 17:51:27 -- common/autotest_common.sh@10 -- # set +x 00:26:06.645 [2024-07-24 17:51:27.894201] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:06.645 [2024-07-24 17:51:27.894247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.645 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.645 [2024-07-24 17:51:27.952684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:06.645 [2024-07-24 17:51:28.024858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:06.645 [2024-07-24 17:51:28.024984] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.645 [2024-07-24 17:51:28.024996] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.645 [2024-07-24 17:51:28.025003] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
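nvmf_tcp_init above splits the two E810 ports into an initiator side (cvl_0_1, 10.0.0.1) and a target side (cvl_0_0, 10.0.0.2, moved into the cvl_0_0_ns_spdk namespace), and the target application is then launched inside that namespace; a condensed sketch of the commands the trace shows, with the nvmf_tgt path shortened:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2        # the connectivity check whose output is logged above
# -i shm id, -e tracepoint group mask, -m reactor core mask (0xF -> 4 cores)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &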
00:26:06.645 [2024-07-24 17:51:28.025108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.645 [2024-07-24 17:51:28.025140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.645 [2024-07-24 17:51:28.025223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.645 [2024-07-24 17:51:28.025226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.211 17:51:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:07.211 17:51:28 -- common/autotest_common.sh@852 -- # return 0 00:26:07.211 17:51:28 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:07.468 [2024-07-24 17:51:28.858868] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.468 17:51:28 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:07.468 17:51:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:07.468 17:51:28 -- common/autotest_common.sh@10 -- # set +x 00:26:07.468 17:51:28 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:07.725 Malloc1 00:26:07.725 17:51:29 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:07.725 17:51:29 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:07.982 17:51:29 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.239 [2024-07-24 17:51:29.592965] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.239 17:51:29 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:08.239 17:51:29 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:08.239 17:51:29 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:08.239 17:51:29 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:08.239 17:51:29 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:08.239 17:51:29 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:08.239 17:51:29 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:08.239 17:51:29 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:08.239 17:51:29 -- common/autotest_common.sh@1320 -- # shift 00:26:08.239 17:51:29 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:08.239 17:51:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.239 17:51:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:08.239 17:51:29 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:26:08.239 17:51:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:08.239 17:51:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:08.239 17:51:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:08.239 17:51:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.239 17:51:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:08.239 17:51:29 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:08.239 17:51:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:08.497 17:51:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:08.497 17:51:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:08.497 17:51:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:08.497 17:51:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:08.754 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:08.754 fio-3.35 00:26:08.754 Starting 1 thread 00:26:08.754 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.274 00:26:11.274 test: (groupid=0, jobs=1): err= 0: pid=739458: Wed Jul 24 17:51:32 2024 00:26:11.274 read: IOPS=3985, BW=15.6MiB/s (16.3MB/s)(31.2MiB/2002msec) 00:26:11.274 slat (nsec): min=1591, max=5225.3k, avg=13295.99, stdev=89744.80 00:26:11.274 clat (usec): min=1535, max=24986, avg=16111.25, stdev=1837.51 00:26:11.274 lat (usec): min=1536, max=24994, avg=16124.55, stdev=1839.16 00:26:11.274 clat percentiles (usec): 00:26:11.274 | 1.00th=[11469], 5.00th=[12911], 10.00th=[13960], 20.00th=[14877], 00:26:11.274 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16188], 60.00th=[16581], 00:26:11.274 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18744], 00:26:11.274 | 99.00th=[20317], 99.50th=[21627], 99.90th=[24249], 99.95th=[24773], 00:26:11.274 | 99.99th=[25035] 00:26:11.274 bw ( KiB/s): min=14842, max=16648, per=99.28%, avg=15826.50, stdev=751.05, samples=4 00:26:11.274 iops : min= 3710, max= 4162, avg=3956.50, stdev=187.98, samples=4 00:26:11.274 write: IOPS=4005, BW=15.6MiB/s (16.4MB/s)(31.3MiB/2002msec); 0 zone resets 00:26:11.274 slat (nsec): min=1637, max=920631, avg=13368.23, stdev=64794.88 00:26:11.274 clat (usec): min=1586, max=23744, avg=15676.55, stdev=1914.21 00:26:11.275 lat (usec): min=1589, max=23748, avg=15689.92, stdev=1914.24 00:26:11.275 clat percentiles (usec): 00:26:11.275 | 1.00th=[10814], 5.00th=[12387], 10.00th=[13304], 20.00th=[14353], 00:26:11.275 | 30.00th=[14877], 40.00th=[15401], 50.00th=[15795], 60.00th=[16188], 00:26:11.275 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18482], 00:26:11.275 | 99.00th=[20055], 99.50th=[20579], 99.90th=[22676], 99.95th=[22938], 00:26:11.275 | 99.99th=[23725] 00:26:11.275 bw ( KiB/s): min=15368, max=16480, per=99.23%, avg=15898.00, stdev=545.23, samples=4 00:26:11.275 iops : min= 3842, max= 4120, avg=3974.50, stdev=136.31, samples=4 00:26:11.275 lat (msec) : 2=0.02%, 4=0.10%, 10=0.26%, 20=98.42%, 50=1.20% 00:26:11.275 cpu : usr=2.60%, sys=14.69%, ctx=245, majf=0, minf=4 00:26:11.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:11.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:26:11.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:11.275 issued rwts: total=7978,8019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.275 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:11.275 00:26:11.275 Run status group 0 (all jobs): 00:26:11.275 READ: bw=15.6MiB/s (16.3MB/s), 15.6MiB/s-15.6MiB/s (16.3MB/s-16.3MB/s), io=31.2MiB (32.7MB), run=2002-2002msec 00:26:11.275 WRITE: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=31.3MiB (32.8MB), run=2002-2002msec 00:26:11.275 17:51:32 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:11.275 17:51:32 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:11.275 17:51:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:11.275 17:51:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:11.275 17:51:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:11.275 17:51:32 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:11.275 17:51:32 -- common/autotest_common.sh@1320 -- # shift 00:26:11.275 17:51:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:11.275 17:51:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.275 17:51:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:11.275 17:51:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:11.275 17:51:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:11.275 17:51:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:11.275 17:51:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:11.275 17:51:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.275 17:51:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:11.275 17:51:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:11.275 17:51:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:11.275 17:51:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:11.275 17:51:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:11.275 17:51:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:11.275 17:51:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:11.275 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:11.275 fio-3.35 00:26:11.275 Starting 1 thread 00:26:11.275 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.861 00:26:13.861 test: (groupid=0, jobs=1): err= 0: pid=740004: Wed Jul 24 17:51:35 2024 00:26:13.861 read: IOPS=9222, BW=144MiB/s (151MB/s)(289MiB/2005msec) 00:26:13.861 slat (nsec): min=2553, max=89934, avg=2858.20, stdev=1443.27 00:26:13.861 
clat (usec): min=2832, max=44716, avg=8657.08, stdev=3775.60 00:26:13.861 lat (usec): min=2834, max=44718, avg=8659.94, stdev=3776.19 00:26:13.861 clat percentiles (usec): 00:26:13.861 | 1.00th=[ 4015], 5.00th=[ 4948], 10.00th=[ 5604], 20.00th=[ 6325], 00:26:13.861 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 7898], 60.00th=[ 8455], 00:26:13.861 | 70.00th=[ 8979], 80.00th=[ 9896], 90.00th=[11600], 95.00th=[14877], 00:26:13.861 | 99.00th=[26608], 99.50th=[27395], 99.90th=[29754], 99.95th=[30278], 00:26:13.861 | 99.99th=[39060] 00:26:13.861 bw ( KiB/s): min=66016, max=80544, per=49.28%, avg=72712.00, stdev=7233.76, samples=4 00:26:13.861 iops : min= 4126, max= 5034, avg=4544.50, stdev=452.11, samples=4 00:26:13.861 write: IOPS=5424, BW=84.8MiB/s (88.9MB/s)(149MiB/1758msec); 0 zone resets 00:26:13.861 slat (usec): min=29, max=351, avg=31.75, stdev= 6.62 00:26:13.861 clat (usec): min=3678, max=36671, avg=9313.23, stdev=3569.38 00:26:13.861 lat (usec): min=3709, max=36703, avg=9344.98, stdev=3572.17 00:26:13.861 clat percentiles (usec): 00:26:13.861 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7504], 00:26:13.861 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8979], 00:26:13.861 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11076], 95.00th=[13173], 00:26:13.861 | 99.00th=[27919], 99.50th=[30278], 99.90th=[33817], 99.95th=[34866], 00:26:13.861 | 99.99th=[36439] 00:26:13.861 bw ( KiB/s): min=69408, max=83968, per=87.43%, avg=75880.00, stdev=7478.76, samples=4 00:26:13.861 iops : min= 4338, max= 5248, avg=4742.50, stdev=467.42, samples=4 00:26:13.861 lat (msec) : 4=0.68%, 10=79.50%, 20=16.91%, 50=2.92% 00:26:13.861 cpu : usr=84.08%, sys=11.63%, ctx=18, majf=0, minf=1 00:26:13.861 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:13.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:13.861 issued rwts: total=18491,9536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:13.861 00:26:13.861 Run status group 0 (all jobs): 00:26:13.861 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=289MiB (303MB), run=2005-2005msec 00:26:13.861 WRITE: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=149MiB (156MB), run=1758-1758msec 00:26:13.861 17:51:35 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.861 17:51:35 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:26:13.861 17:51:35 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:26:13.861 17:51:35 -- host/fio.sh@51 -- # get_nvme_bdfs 00:26:13.861 17:51:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:26:13.861 17:51:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:26:13.861 17:51:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:13.861 17:51:35 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:13.861 17:51:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:26:14.118 17:51:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:26:14.118 17:51:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:26:14.118 17:51:35 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:26:17.401 Nvme0n1 00:26:17.401 17:51:38 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:26:19.931 17:51:41 -- host/fio.sh@53 -- # ls_guid=ce1c08cb-389d-41a9-b00d-8ace65e45c47 00:26:19.931 17:51:41 -- host/fio.sh@54 -- # get_lvs_free_mb ce1c08cb-389d-41a9-b00d-8ace65e45c47 00:26:19.931 17:51:41 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ce1c08cb-389d-41a9-b00d-8ace65e45c47 00:26:19.931 17:51:41 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:19.931 17:51:41 -- common/autotest_common.sh@1345 -- # local fc 00:26:19.931 17:51:41 -- common/autotest_common.sh@1346 -- # local cs 00:26:19.931 17:51:41 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:20.189 17:51:41 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:20.189 { 00:26:20.189 "uuid": "ce1c08cb-389d-41a9-b00d-8ace65e45c47", 00:26:20.189 "name": "lvs_0", 00:26:20.189 "base_bdev": "Nvme0n1", 00:26:20.189 "total_data_clusters": 930, 00:26:20.189 "free_clusters": 930, 00:26:20.189 "block_size": 512, 00:26:20.189 "cluster_size": 1073741824 00:26:20.189 } 00:26:20.189 ]' 00:26:20.189 17:51:41 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ce1c08cb-389d-41a9-b00d-8ace65e45c47") .free_clusters' 00:26:20.189 17:51:41 -- common/autotest_common.sh@1348 -- # fc=930 00:26:20.189 17:51:41 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ce1c08cb-389d-41a9-b00d-8ace65e45c47") .cluster_size' 00:26:20.189 17:51:41 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:26:20.189 17:51:41 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:26:20.189 17:51:41 -- common/autotest_common.sh@1353 -- # echo 952320 00:26:20.189 952320 00:26:20.189 17:51:41 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:26:20.446 1dbc472c-2659-44b0-939c-60bcc3d733ec 00:26:20.446 17:51:41 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:26:20.705 17:51:42 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:26:20.963 17:51:42 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:20.963 17:51:42 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:20.963 17:51:42 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:20.963 17:51:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:20.963 17:51:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:20.963 17:51:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:20.963 17:51:42 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:20.963 17:51:42 -- common/autotest_common.sh@1320 -- # shift 00:26:20.963 17:51:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:20.963 17:51:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.963 17:51:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:20.963 17:51:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:20.963 17:51:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:20.963 17:51:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:20.963 17:51:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:20.963 17:51:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.963 17:51:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:20.963 17:51:42 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:20.963 17:51:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:20.963 17:51:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:20.964 17:51:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:20.964 17:51:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:20.964 17:51:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:21.222 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:21.222 fio-3.35 00:26:21.222 Starting 1 thread 00:26:21.480 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.011 00:26:24.011 test: (groupid=0, jobs=1): err= 0: pid=741733: Wed Jul 24 17:51:45 2024 00:26:24.011 read: IOPS=8077, BW=31.6MiB/s (33.1MB/s)(63.3MiB/2005msec) 00:26:24.011 slat (nsec): min=1559, max=112457, avg=1759.99, stdev=1254.51 00:26:24.011 clat (msec): min=2, max=179, avg= 9.05, stdev=10.65 00:26:24.011 lat (msec): min=2, max=179, avg= 9.06, stdev=10.65 00:26:24.011 clat percentiles (msec): 00:26:24.011 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:26:24.011 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:26:24.011 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 12], 00:26:24.011 | 99.00th=[ 16], 99.50th=[ 17], 99.90th=[ 176], 99.95th=[ 176], 00:26:24.011 | 99.99th=[ 178] 00:26:24.011 bw ( KiB/s): min=22576, max=35888, per=99.87%, avg=32270.00, stdev=6470.94, samples=4 00:26:24.011 iops : min= 5644, max= 8972, avg=8067.50, stdev=1617.74, samples=4 00:26:24.011 write: IOPS=8068, BW=31.5MiB/s (33.0MB/s)(63.2MiB/2005msec); 0 zone resets 00:26:24.011 slat (nsec): min=1631, max=83871, avg=1846.09, stdev=863.87 00:26:24.011 clat (usec): min=505, max=172817, avg=6727.06, stdev=9744.41 00:26:24.011 lat (usec): min=506, max=172822, avg=6728.91, stdev=9744.60 00:26:24.011 clat percentiles (msec): 00:26:24.011 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:26:24.011 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 7], 00:26:24.011 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:26:24.011 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 171], 99.95th=[ 171], 00:26:24.011 | 99.99th=[ 174] 00:26:24.011 bw ( KiB/s): min=23552, max=35728, per=99.84%, avg=32222.00, 
stdev=5797.65, samples=4 00:26:24.011 iops : min= 5888, max= 8932, avg=8055.50, stdev=1449.41, samples=4 00:26:24.011 lat (usec) : 750=0.01% 00:26:24.011 lat (msec) : 2=0.01%, 4=0.86%, 10=92.09%, 20=6.65%, 250=0.40% 00:26:24.011 cpu : usr=64.97%, sys=28.64%, ctx=48, majf=0, minf=4 00:26:24.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:24.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:24.011 issued rwts: total=16196,16177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:24.011 00:26:24.011 Run status group 0 (all jobs): 00:26:24.011 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=63.3MiB (66.3MB), run=2005-2005msec 00:26:24.011 WRITE: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=63.2MiB (66.3MB), run=2005-2005msec 00:26:24.011 17:51:45 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:24.011 17:51:45 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:26:24.946 17:51:46 -- host/fio.sh@64 -- # ls_nested_guid=9a2b7edd-64a5-4635-9518-540383fdfdb9 00:26:24.946 17:51:46 -- host/fio.sh@65 -- # get_lvs_free_mb 9a2b7edd-64a5-4635-9518-540383fdfdb9 00:26:24.946 17:51:46 -- common/autotest_common.sh@1343 -- # local lvs_uuid=9a2b7edd-64a5-4635-9518-540383fdfdb9 00:26:24.946 17:51:46 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:24.946 17:51:46 -- common/autotest_common.sh@1345 -- # local fc 00:26:24.946 17:51:46 -- common/autotest_common.sh@1346 -- # local cs 00:26:24.946 17:51:46 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:24.946 17:51:46 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:24.946 { 00:26:24.946 "uuid": "ce1c08cb-389d-41a9-b00d-8ace65e45c47", 00:26:24.946 "name": "lvs_0", 00:26:24.946 "base_bdev": "Nvme0n1", 00:26:24.946 "total_data_clusters": 930, 00:26:24.946 "free_clusters": 0, 00:26:24.946 "block_size": 512, 00:26:24.946 "cluster_size": 1073741824 00:26:24.946 }, 00:26:24.946 { 00:26:24.946 "uuid": "9a2b7edd-64a5-4635-9518-540383fdfdb9", 00:26:24.946 "name": "lvs_n_0", 00:26:24.946 "base_bdev": "1dbc472c-2659-44b0-939c-60bcc3d733ec", 00:26:24.946 "total_data_clusters": 237847, 00:26:24.946 "free_clusters": 237847, 00:26:24.946 "block_size": 512, 00:26:24.946 "cluster_size": 4194304 00:26:24.946 } 00:26:24.946 ]' 00:26:24.946 17:51:46 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="9a2b7edd-64a5-4635-9518-540383fdfdb9") .free_clusters' 00:26:25.205 17:51:46 -- common/autotest_common.sh@1348 -- # fc=237847 00:26:25.205 17:51:46 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="9a2b7edd-64a5-4635-9518-540383fdfdb9") .cluster_size' 00:26:25.205 17:51:46 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:25.205 17:51:46 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:26:25.205 17:51:46 -- common/autotest_common.sh@1353 -- # echo 951388 00:26:25.205 951388 00:26:25.205 17:51:46 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:26:25.772 3c29f47b-caf8-44e4-ac11-96f9f5ad4efb 00:26:25.772 
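Leaving out the cnode2 subsystem create/delete steps in between, the lvol plumbing that host/fio.sh walked through above reduces to the RPC sequence sketched below; bdf, names and sizes are the ones in the trace (930 x 1 GiB clusters -> 952320 MiB, 237847 x 4 MiB clusters -> 951388 MiB), and rpc.py stands in for the full scripts/rpc.py path:

rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2   # arguments as logged
rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0        # 1 GiB clusters
rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320                      # size in MiB, from get_lvs_free_mb
rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388               # nested lvol used by the next fio pass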
17:51:47 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:26:25.772 17:51:47 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:26:26.030 17:51:47 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:26.289 17:51:47 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:26.289 17:51:47 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:26.289 17:51:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:26.289 17:51:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:26.289 17:51:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:26.289 17:51:47 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.289 17:51:47 -- common/autotest_common.sh@1320 -- # shift 00:26:26.289 17:51:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:26.289 17:51:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.289 17:51:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.289 17:51:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:26.289 17:51:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:26.289 17:51:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:26.289 17:51:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:26.289 17:51:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.289 17:51:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.289 17:51:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:26.289 17:51:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:26.289 17:51:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:26.289 17:51:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:26.289 17:51:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:26.289 17:51:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:26.548 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:26.548 fio-3.35 00:26:26.548 Starting 1 thread 00:26:26.548 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.117 00:26:29.117 test: (groupid=0, jobs=1): err= 0: pid=742646: Wed Jul 24 17:51:50 2024 00:26:29.117 read: IOPS=7715, BW=30.1MiB/s (31.6MB/s)(60.5MiB/2007msec) 00:26:29.117 slat (nsec): min=1590, max=104691, avg=1733.94, stdev=1103.74 
00:26:29.117 clat (usec): min=4677, max=22003, avg=9470.88, stdev=2042.35 00:26:29.117 lat (usec): min=4682, max=22006, avg=9472.62, stdev=2042.36 00:26:29.117 clat percentiles (usec): 00:26:29.117 | 1.00th=[ 6390], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8094], 00:26:29.117 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:26:29.117 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11994], 95.00th=[13829], 00:26:29.117 | 99.00th=[17433], 99.50th=[18220], 99.90th=[20579], 99.95th=[21890], 00:26:29.117 | 99.99th=[21890] 00:26:29.117 bw ( KiB/s): min=29216, max=31880, per=99.92%, avg=30838.00, stdev=1207.87, samples=4 00:26:29.117 iops : min= 7304, max= 7970, avg=7709.50, stdev=301.97, samples=4 00:26:29.117 write: IOPS=7707, BW=30.1MiB/s (31.6MB/s)(60.4MiB/2007msec); 0 zone resets 00:26:29.117 slat (nsec): min=1658, max=83828, avg=1819.78, stdev=795.05 00:26:29.117 clat (usec): min=3072, max=14659, avg=7006.56, stdev=1240.89 00:26:29.117 lat (usec): min=3074, max=14664, avg=7008.38, stdev=1240.94 00:26:29.117 clat percentiles (usec): 00:26:29.117 | 1.00th=[ 4293], 5.00th=[ 5080], 10.00th=[ 5538], 20.00th=[ 6128], 00:26:29.117 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 6980], 60.00th=[ 7177], 00:26:29.117 | 70.00th=[ 7439], 80.00th=[ 7832], 90.00th=[ 8455], 95.00th=[ 9110], 00:26:29.117 | 99.00th=[10814], 99.50th=[11863], 99.90th=[13960], 99.95th=[14353], 00:26:29.117 | 99.99th=[14615] 00:26:29.117 bw ( KiB/s): min=30344, max=31232, per=99.96%, avg=30818.00, stdev=389.15, samples=4 00:26:29.117 iops : min= 7586, max= 7808, avg=7704.50, stdev=97.29, samples=4 00:26:29.117 lat (msec) : 4=0.26%, 10=86.10%, 20=13.57%, 50=0.08% 00:26:29.117 cpu : usr=65.30%, sys=28.27%, ctx=27, majf=0, minf=4 00:26:29.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:29.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:29.118 issued rwts: total=15486,15469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:29.118 00:26:29.118 Run status group 0 (all jobs): 00:26:29.118 READ: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.5MiB (63.4MB), run=2007-2007msec 00:26:29.118 WRITE: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.4MiB (63.4MB), run=2007-2007msec 00:26:29.118 17:51:50 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:29.118 17:51:50 -- host/fio.sh@74 -- # sync 00:26:29.118 17:51:50 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:26:33.305 17:51:54 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:33.305 17:51:54 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:26:35.923 17:51:57 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:35.923 17:51:57 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:26:37.829 17:51:59 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:37.829 17:51:59 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:37.829 17:51:59 
-- host/fio.sh@86 -- # nvmftestfini 00:26:37.829 17:51:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:37.829 17:51:59 -- nvmf/common.sh@116 -- # sync 00:26:37.829 17:51:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:37.829 17:51:59 -- nvmf/common.sh@119 -- # set +e 00:26:37.829 17:51:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:37.829 17:51:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:37.829 rmmod nvme_tcp 00:26:37.829 rmmod nvme_fabrics 00:26:37.829 rmmod nvme_keyring 00:26:37.829 17:51:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:37.829 17:51:59 -- nvmf/common.sh@123 -- # set -e 00:26:37.829 17:51:59 -- nvmf/common.sh@124 -- # return 0 00:26:37.829 17:51:59 -- nvmf/common.sh@477 -- # '[' -n 738849 ']' 00:26:37.829 17:51:59 -- nvmf/common.sh@478 -- # killprocess 738849 00:26:37.829 17:51:59 -- common/autotest_common.sh@926 -- # '[' -z 738849 ']' 00:26:37.829 17:51:59 -- common/autotest_common.sh@930 -- # kill -0 738849 00:26:37.829 17:51:59 -- common/autotest_common.sh@931 -- # uname 00:26:37.829 17:51:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:37.829 17:51:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 738849 00:26:37.829 17:51:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:37.829 17:51:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:37.829 17:51:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 738849' 00:26:37.829 killing process with pid 738849 00:26:37.829 17:51:59 -- common/autotest_common.sh@945 -- # kill 738849 00:26:37.829 17:51:59 -- common/autotest_common.sh@950 -- # wait 738849 00:26:38.088 17:51:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:38.088 17:51:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:38.088 17:51:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:38.088 17:51:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.088 17:51:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:38.088 17:51:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.088 17:51:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.088 17:51:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.637 17:52:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:40.637 00:26:40.637 real 0m39.129s 00:26:40.637 user 2m36.407s 00:26:40.637 sys 0m8.096s 00:26:40.637 17:52:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.637 17:52:01 -- common/autotest_common.sh@10 -- # set +x 00:26:40.637 ************************************ 00:26:40.637 END TEST nvmf_fio_host 00:26:40.637 ************************************ 00:26:40.637 17:52:01 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:40.637 17:52:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:40.637 17:52:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:40.637 17:52:01 -- common/autotest_common.sh@10 -- # set +x 00:26:40.637 ************************************ 00:26:40.637 START TEST nvmf_failover 00:26:40.637 ************************************ 00:26:40.637 17:52:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:40.637 * Looking for test storage... 
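Before the failover suite starts probing the NICs again, note that the nvmf_fio_host teardown just above (nvmftestfini) boils down to a handful of steps; a rough sketch, keeping only the commands the trace actually shows:

modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill 738849                      # the nvmf_tgt started for this suite
_remove_spdk_ns                  # nvmf/common.sh helper; presumably removes cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1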
00:26:40.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:40.637 17:52:01 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.637 17:52:01 -- nvmf/common.sh@7 -- # uname -s 00:26:40.637 17:52:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.637 17:52:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.637 17:52:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.637 17:52:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.637 17:52:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.637 17:52:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.637 17:52:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.637 17:52:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.637 17:52:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.637 17:52:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.637 17:52:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:40.637 17:52:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:40.637 17:52:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.637 17:52:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.637 17:52:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.637 17:52:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.637 17:52:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.637 17:52:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.637 17:52:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.637 17:52:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.638 17:52:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.638 17:52:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.638 17:52:01 -- paths/export.sh@5 -- # export PATH 00:26:40.638 17:52:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.638 17:52:01 -- nvmf/common.sh@46 -- # : 0 00:26:40.638 17:52:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:40.638 17:52:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:40.638 17:52:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:40.638 17:52:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.638 17:52:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.638 17:52:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:40.638 17:52:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:40.638 17:52:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:40.638 17:52:01 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:40.638 17:52:01 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:40.638 17:52:01 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:40.638 17:52:01 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:40.638 17:52:01 -- host/failover.sh@18 -- # nvmftestinit 00:26:40.638 17:52:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:40.638 17:52:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.638 17:52:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:40.638 17:52:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:40.638 17:52:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:40.638 17:52:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.638 17:52:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.638 17:52:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.638 17:52:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:40.638 17:52:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:40.638 17:52:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:40.638 17:52:01 -- common/autotest_common.sh@10 -- # set +x 00:26:45.915 17:52:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:45.915 17:52:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:45.915 17:52:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:45.915 17:52:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:45.915 17:52:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:45.915 17:52:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:45.915 17:52:06 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:26:45.915 17:52:06 -- nvmf/common.sh@294 -- # net_devs=() 00:26:45.915 17:52:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:45.915 17:52:06 -- nvmf/common.sh@295 -- # e810=() 00:26:45.915 17:52:06 -- nvmf/common.sh@295 -- # local -ga e810 00:26:45.915 17:52:06 -- nvmf/common.sh@296 -- # x722=() 00:26:45.915 17:52:06 -- nvmf/common.sh@296 -- # local -ga x722 00:26:45.915 17:52:06 -- nvmf/common.sh@297 -- # mlx=() 00:26:45.915 17:52:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:45.915 17:52:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.915 17:52:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:45.915 17:52:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:45.915 17:52:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:45.915 17:52:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:45.915 17:52:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:45.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:45.915 17:52:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:45.915 17:52:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:45.915 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:45.915 17:52:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:45.915 17:52:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:45.915 17:52:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.915 17:52:06 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:26:45.915 17:52:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.915 17:52:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:45.915 Found net devices under 0000:86:00.0: cvl_0_0 00:26:45.915 17:52:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.915 17:52:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:45.915 17:52:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.915 17:52:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:45.915 17:52:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.915 17:52:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:45.915 Found net devices under 0000:86:00.1: cvl_0_1 00:26:45.915 17:52:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.915 17:52:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:45.915 17:52:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:45.915 17:52:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:45.915 17:52:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:45.915 17:52:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.915 17:52:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.915 17:52:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.915 17:52:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:45.915 17:52:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.915 17:52:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.915 17:52:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:45.915 17:52:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.915 17:52:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.915 17:52:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:45.915 17:52:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:45.915 17:52:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.915 17:52:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.915 17:52:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.915 17:52:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.915 17:52:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:45.915 17:52:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.915 17:52:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.915 17:52:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.915 17:52:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:45.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:26:45.915 00:26:45.915 --- 10.0.0.2 ping statistics --- 00:26:45.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.915 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:45.915 17:52:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:45.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:26:45.915 00:26:45.915 --- 10.0.0.1 ping statistics --- 00:26:45.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.915 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:26:45.915 17:52:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.915 17:52:07 -- nvmf/common.sh@410 -- # return 0 00:26:45.915 17:52:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:45.915 17:52:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.915 17:52:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:45.915 17:52:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:45.915 17:52:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.915 17:52:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:45.915 17:52:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:45.915 17:52:07 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:45.915 17:52:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:45.915 17:52:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:45.915 17:52:07 -- common/autotest_common.sh@10 -- # set +x 00:26:45.915 17:52:07 -- nvmf/common.sh@469 -- # nvmfpid=747852 00:26:45.915 17:52:07 -- nvmf/common.sh@470 -- # waitforlisten 747852 00:26:45.915 17:52:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:45.915 17:52:07 -- common/autotest_common.sh@819 -- # '[' -z 747852 ']' 00:26:45.915 17:52:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.915 17:52:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:45.915 17:52:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.915 17:52:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:45.915 17:52:07 -- common/autotest_common.sh@10 -- # set +x 00:26:45.915 [2024-07-24 17:52:07.194655] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:45.915 [2024-07-24 17:52:07.194699] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.915 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.915 [2024-07-24 17:52:07.254467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:45.915 [2024-07-24 17:52:07.325967] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:45.915 [2024-07-24 17:52:07.326093] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.915 [2024-07-24 17:52:07.326101] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.915 [2024-07-24 17:52:07.326107] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
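The nvmf_tcp_init block traced above moves one E810 port (cvl_0_0) into a private network namespace to act as the target, leaves its sibling (cvl_0_1) on the host as the initiator, and then proves connectivity in both directions. Restated as a standalone root script using the same commands that appear in the trace (the cvl_0_0/cvl_0_1 names belong to this machine's ice ports):

# move the target port into its own namespace and address both ends
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in on the default port
ping -c 1 10.0.0.2                                                 # host -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespaced target -> host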
00:26:45.915 [2024-07-24 17:52:07.326208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.915 [2024-07-24 17:52:07.326292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.915 [2024-07-24 17:52:07.326293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.481 17:52:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:46.481 17:52:07 -- common/autotest_common.sh@852 -- # return 0 00:26:46.481 17:52:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:46.481 17:52:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:46.481 17:52:07 -- common/autotest_common.sh@10 -- # set +x 00:26:46.481 17:52:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.481 17:52:08 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:46.740 [2024-07-24 17:52:08.179280] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.740 17:52:08 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:46.999 Malloc0 00:26:46.999 17:52:08 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:46.999 17:52:08 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:47.258 17:52:08 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.517 [2024-07-24 17:52:08.926539] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.517 17:52:08 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:47.517 [2024-07-24 17:52:09.099040] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:47.776 17:52:09 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:47.776 [2024-07-24 17:52:09.263622] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:47.776 17:52:09 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:47.776 17:52:09 -- host/failover.sh@31 -- # bdevperf_pid=748304 00:26:47.776 17:52:09 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:47.776 17:52:09 -- host/failover.sh@34 -- # waitforlisten 748304 /var/tmp/bdevperf.sock 00:26:47.776 17:52:09 -- common/autotest_common.sh@819 -- # '[' -z 748304 ']' 00:26:47.776 17:52:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:47.776 17:52:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:47.776 17:52:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:26:47.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:47.776 17:52:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:47.776 17:52:09 -- common/autotest_common.sh@10 -- # set +x 00:26:48.712 17:52:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:48.712 17:52:10 -- common/autotest_common.sh@852 -- # return 0 00:26:48.712 17:52:10 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:48.971 NVMe0n1 00:26:48.971 17:52:10 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:49.538 00:26:49.538 17:52:10 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:49.538 17:52:10 -- host/failover.sh@39 -- # run_test_pid=748544 00:26:49.538 17:52:10 -- host/failover.sh@41 -- # sleep 1 00:26:50.475 17:52:11 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.475 [2024-07-24 17:52:12.047003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.475 [2024-07-24 17:52:12.047157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the 
state(5) to be set
[... further identical tcp.c:1574 recv-state messages for tqpair=0x1a06600 elided ...]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same
with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.476 [2024-07-24 17:52:12.047649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a06600 is same with the state(5) to be set 00:26:50.735 17:52:12 -- host/failover.sh@45 -- # sleep 3 00:26:54.028 17:52:15 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:54.028 00:26:54.028 17:52:15 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:54.028 [2024-07-24 17:52:15.571607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.028 [2024-07-24 17:52:15.571652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.028 [2024-07-24 17:52:15.571660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.028 [2024-07-24 17:52:15.571666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.028 [2024-07-24 17:52:15.571672] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set
[... further identical tcp.c:1574 recv-state messages for tqpair=0x1a07240 elided ...]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the
state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 [2024-07-24 17:52:15.571883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a07240 is same with the state(5) to be set 00:26:54.029 17:52:15 -- host/failover.sh@50 -- # sleep 3 00:26:57.356 17:52:18 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.356 [2024-07-24 17:52:18.764217] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.356 17:52:18 -- host/failover.sh@55 -- # sleep 1 00:26:58.295 17:52:19 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:58.555 [2024-07-24 17:52:19.956817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.555 [2024-07-24 17:52:19.956860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.555 [2024-07-24 17:52:19.956877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.555 [2024-07-24 17:52:19.956884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1861c10 is same with the state(5) to be set
[... further identical tcp.c:1574 recv-state messages for tqpair=0x1861c10 elided ...]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.556 [2024-07-24 17:52:19.957275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.556 [2024-07-24 17:52:19.957281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.556 [2024-07-24 17:52:19.957287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.556 [2024-07-24 17:52:19.957292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.556 [2024-07-24 17:52:19.957298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.556 [2024-07-24 17:52:19.957304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.556 [2024-07-24 17:52:19.957310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861c10 is same with the state(5) to be set 00:26:58.556 17:52:19 -- host/failover.sh@59 -- # wait 748544 00:27:05.139 0 00:27:05.139 17:52:26 -- host/failover.sh@61 -- # killprocess 748304 00:27:05.139 17:52:26 -- common/autotest_common.sh@926 -- # '[' -z 748304 ']' 00:27:05.139 17:52:26 -- common/autotest_common.sh@930 -- # kill -0 748304 00:27:05.139 17:52:26 -- common/autotest_common.sh@931 -- # uname 00:27:05.139 17:52:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:05.139 17:52:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 748304 00:27:05.139 17:52:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:05.139 17:52:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:05.139 17:52:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 748304' 00:27:05.139 killing process with pid 748304 00:27:05.139 17:52:26 -- common/autotest_common.sh@945 -- # kill 748304 00:27:05.139 17:52:26 -- common/autotest_common.sh@950 -- # wait 748304 00:27:05.139 17:52:26 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:05.139 [2024-07-24 17:52:09.335148] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:05.139 [2024-07-24 17:52:09.335199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748304 ] 00:27:05.139 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.139 [2024-07-24 17:52:09.389251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.139 [2024-07-24 17:52:09.462141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.139 Running I/O for 15 seconds... 
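What the trace above exercises: bdevperf drives 15 seconds of verify I/O against nqn.2016-06.io.spdk:cnode1 with paths attached on ports 4420 and 4421, while the script removes and restores target listeners so the host multipath code has to fail over each time. Below is a condensed sketch of that sequence using the same rpc.py calls that appear in the trace; it is a simplification of host/failover.sh rather than its full logic, and the workspace path, NQN, and run_test_pid are the ones from this run.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # drop the active path; I/O should fail over to 4421
sleep 3
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn   # add a third path
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # second failover; only 4422 remains
sleep 3
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # bring the original path back
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
wait $run_test_pid                                                    # bdevperf (748544 here) finished with status 0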
00:27:05.139 [2024-07-24 17:52:12.048631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048907] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.048982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.048993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049397] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.140 [2024-07-24 17:52:12.049597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.140 [2024-07-24 17:52:12.049609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.049988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.049999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 
17:52:12.050100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.141 [2024-07-24 17:52:12.050233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.141 [2024-07-24 17:52:12.050256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.141 [2024-07-24 17:52:12.050278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.141 [2024-07-24 17:52:12.050300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.141 [2024-07-24 17:52:12.050322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.141 [2024-07-24 17:52:12.050489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.141 [2024-07-24 17:52:12.050499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.050521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.050566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.050614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.050659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.050683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.050728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.050772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.050989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.050999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:05.142 [2024-07-24 17:52:12.051011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.051021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.051048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.051072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.051094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.051117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.051139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.051161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.051184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.051206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.051230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051242] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.142 [2024-07-24 17:52:12.051255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.051278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.051300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.051323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.051346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.051368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.051390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.142 [2024-07-24 17:52:12.051402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.142 [2024-07-24 17:52:12.051412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:12.051434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:12.051457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:12.051479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:12.051501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:12.051523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:12.051550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:12.051572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:12.051595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9010 is same with the state(5) to be set 00:27:05.143 [2024-07-24 17:52:12.051618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.143 [2024-07-24 17:52:12.051628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.143 [2024-07-24 17:52:12.051643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:8 PRP1 0x0 PRP2 0x0 00:27:05.143 [2024-07-24 17:52:12.051653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051703] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb9010 was disconnected and freed. reset controller. 
00:27:05.143 [2024-07-24 17:52:12.051721] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:05.143 [2024-07-24 17:52:12.051752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.143 [2024-07-24 17:52:12.051764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.143 [2024-07-24 17:52:12.051786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.143 [2024-07-24 17:52:12.051808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.143 [2024-07-24 17:52:12.051829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:12.051838] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.143 [2024-07-24 17:52:12.054112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.143 [2024-07-24 17:52:12.054151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc3010 (9): Bad file descriptor 00:27:05.143 [2024-07-24 17:52:12.165866] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
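The block above records one complete failover cycle from the try.txt output: the in-flight bdevperf reads and writes on qpair 0xdb9010 are aborted with SQ DELETION status, bdev_nvme_failover_trid switches the path from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes successfully. As a rough illustration only (not the verbatim steps of test/nvmf/host/failover.sh), a listener-based failover of this kind can be provoked against a running SPDK nvmf TCP target with the stock rpc.py commands; the bdev name, malloc sizes, and serial number below are placeholders, while the NQN, address, and ports are the ones that appear in the log.

    rpc=./scripts/rpc.py                 # assumes an SPDK tree with a running nvmf target app
    nqn=nqn.2016-06.io.spdk:cnode1       # subsystem NQN seen in the log above

    $rpc nvmf_create_transport -t tcp
    $rpc bdev_malloc_create -b Malloc0 64 512
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421

    # The host side must already have the 10.0.0.2:4421 path registered as an
    # alternate trid for the same controller. With an initiator such as bdevperf
    # running I/O over 4420, dropping that listener forces the aborts and the
    # failover to 4421 recorded above:
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420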
00:27:05.143 [2024-07-24 17:52:15.572069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:15.572107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:15.572129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:15.572145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:15.572158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:15.572170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:15.572181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:15.572192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:15.572204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:15.572215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:15.572227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:15.572237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:15.572250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:15.572260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:15.572272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:15.572282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:15.572295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:15.572306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:15.572318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.143 [2024-07-24 17:52:15.572328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.143 [2024-07-24 17:52:15.572341] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated nvme_qpair.c *NOTICE* pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion) omitted: queued READ and WRITE commands on sqid:1 each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:05.146 [2024-07-24 17:52:15.575020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf560 is same with the state(5) to be set
00:27:05.146 [2024-07-24 17:52:15.575032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:05.146 [2024-07-24 17:52:15.575047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:05.146 [2024-07-24 17:52:15.575058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3256 len:8 PRP1 0x0 PRP2 0x0
00:27:05.146 [2024-07-24 17:52:15.575066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.146 [2024-07-24 17:52:15.575115] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdcf560 was disconnected and freed. reset controller.
00:27:05.146 [2024-07-24 17:52:15.575130] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:05.146 [2024-07-24 17:52:15.575158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.146 [2024-07-24 17:52:15.575171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.146 [2024-07-24 17:52:15.575182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.146 [2024-07-24 17:52:15.575192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.146 [2024-07-24 17:52:15.575204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.146 [2024-07-24 17:52:15.575214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.147 [2024-07-24 17:52:15.575228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.147 [2024-07-24 17:52:15.575238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.147 [2024-07-24 17:52:15.575248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.147 [2024-07-24 17:52:15.577465] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.147 [2024-07-24 17:52:15.577503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc3010 (9): Bad file descriptor
00:27:05.147 [2024-07-24 17:52:15.608216] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:05.147 [2024-07-24 17:52:19.957494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.147 [2024-07-24 17:52:19.957531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair.c *NOTICE* pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion) omitted: queued READ and WRITE commands on sqid:1 each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:05.149 [2024-07-24 17:52:19.959383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.149 [2024-07-24 17:52:19.959395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:05.149 [2024-07-24 17:52:19.959632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959858] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.959957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.959984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.959997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.960007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.960018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.149 [2024-07-24 17:52:19.960029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.149 [2024-07-24 17:52:19.960040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.149 [2024-07-24 17:52:19.960057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960090] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.150 [2024-07-24 17:52:19.960123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.150 [2024-07-24 17:52:19.960154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.150 [2024-07-24 17:52:19.960176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.150 [2024-07-24 17:52:19.960220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.150 [2024-07-24 17:52:19.960242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.150 [2024-07-24 17:52:19.960268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74576 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.150 [2024-07-24 17:52:19.960446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbec40 is same with the state(5) to be set 00:27:05.150 [2024-07-24 17:52:19.960469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.150 [2024-07-24 17:52:19.960478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.150 [2024-07-24 17:52:19.960488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74712 len:8 PRP1 0x0 PRP2 0x0 00:27:05.150 [2024-07-24 17:52:19.960498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960550] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdbec40 was disconnected and freed. reset controller. 
00:27:05.150 [2024-07-24 17:52:19.960563] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:05.150 [2024-07-24 17:52:19.960594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.150 [2024-07-24 17:52:19.960607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.150 [2024-07-24 17:52:19.960631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.150 [2024-07-24 17:52:19.960653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.150 [2024-07-24 17:52:19.960675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.150 [2024-07-24 17:52:19.960686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.150 [2024-07-24 17:52:19.960715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc3010 (9): Bad file descriptor 00:27:05.150 [2024-07-24 17:52:19.962695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.150 [2024-07-24 17:52:20.122765] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
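The wall of ABORTED - SQ DELETION completions above is the expected side effect of the path switch logged at the end of the block: when the 10.0.0.2:4422 path goes away, every command still queued on its qpair is aborted, the qpair is freed, and bdev_nvme fails over to 10.0.0.2:4420 and resets the controller. A minimal sketch of the multipath setup that produces this behaviour, using only rpc.py calls that appear later in this log; the $rpc/$sock/$nqn shorthands and the relative rpc.py path are illustrative, the test itself uses the full jenkins workspace paths:

  rpc=scripts/rpc.py                      # shorthand, see note above
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1
  # register the same subsystem over three TCP paths under one controller name
  for port in 4420 4421 4422; do
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done
  # removing one path forces bdev_nvme to fail over to a remaining one, aborting the queued I/O as seen above
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn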
00:27:05.150 00:27:05.150 Latency(us) 00:27:05.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.150 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:05.150 Verification LBA range: start 0x0 length 0x4000 00:27:05.150 NVMe0n1 : 15.00 16221.56 63.37 1425.50 0.00 7240.21 1089.89 22681.15 00:27:05.150 =================================================================================================================== 00:27:05.150 Total : 16221.56 63.37 1425.50 0.00 7240.21 1089.89 22681.15 00:27:05.150 Received shutdown signal, test time was about 15.000000 seconds 00:27:05.150 00:27:05.150 Latency(us) 00:27:05.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.150 =================================================================================================================== 00:27:05.150 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.150 17:52:26 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:05.150 17:52:26 -- host/failover.sh@65 -- # count=3 00:27:05.150 17:52:26 -- host/failover.sh@67 -- # (( count != 3 )) 00:27:05.150 17:52:26 -- host/failover.sh@73 -- # bdevperf_pid=751124 00:27:05.150 17:52:26 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:05.150 17:52:26 -- host/failover.sh@75 -- # waitforlisten 751124 /var/tmp/bdevperf.sock 00:27:05.150 17:52:26 -- common/autotest_common.sh@819 -- # '[' -z 751124 ']' 00:27:05.150 17:52:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:05.150 17:52:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:05.150 17:52:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:05.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
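The 15-second run above is judged not on throughput but on how many successful controller resets bdevperf logged; host/failover.sh@65-@67 expects the count to be exactly 3 for this run. A sketch of that kind of check, assuming the bdevperf output was captured to try.txt as it is here (the explicit error message is illustrative, the script itself just lets the failed arithmetic test fail the run):

  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful resets, saw $count"   # illustrative message, not in failover.sh
      false
  fi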
00:27:05.150 17:52:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:05.150 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:27:05.721 17:52:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:05.721 17:52:27 -- common/autotest_common.sh@852 -- # return 0 00:27:05.721 17:52:27 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:05.721 [2024-07-24 17:52:27.281466] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:05.721 17:52:27 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:05.981 [2024-07-24 17:52:27.470030] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:05.981 17:52:27 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:06.241 NVMe0n1 00:27:06.241 17:52:27 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:06.501 00:27:06.501 17:52:28 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:06.761 00:27:07.021 17:52:28 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:07.021 17:52:28 -- host/failover.sh@82 -- # grep -q NVMe0 00:27:07.021 17:52:28 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:07.281 17:52:28 -- host/failover.sh@87 -- # sleep 3 00:27:10.577 17:52:31 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:10.577 17:52:31 -- host/failover.sh@88 -- # grep -q NVMe0 00:27:10.577 17:52:31 -- host/failover.sh@90 -- # run_test_pid=752066 00:27:10.577 17:52:31 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:10.577 17:52:31 -- host/failover.sh@92 -- # wait 752066 00:27:11.514 0 00:27:11.514 17:52:33 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:11.514 [2024-07-24 17:52:26.323921] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
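The second bdevperf instance above is started with -z (stay idle and wait for RPCs) and -r /var/tmp/bdevperf.sock, so the script can attach the NVMe-oF controller first and only then start I/O with bdevperf.py perform_tests. The pattern, reduced to its essentials with the jenkins workspace prefix dropped from the paths:

  # start bdevperf idle, exposing its own RPC socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock      # test-framework helper, as used above
  # configure the bdev over that socket, then trigger the run
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests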
00:27:11.514 [2024-07-24 17:52:26.323975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751124 ] 00:27:11.514 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.514 [2024-07-24 17:52:26.378235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.514 [2024-07-24 17:52:26.445040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.514 [2024-07-24 17:52:28.694257] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:11.514 [2024-07-24 17:52:28.694305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.514 [2024-07-24 17:52:28.694320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.514 [2024-07-24 17:52:28.694330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.515 [2024-07-24 17:52:28.694340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.515 [2024-07-24 17:52:28.694350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.515 [2024-07-24 17:52:28.694359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.515 [2024-07-24 17:52:28.694372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.515 [2024-07-24 17:52:28.694382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.515 [2024-07-24 17:52:28.694391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.515 [2024-07-24 17:52:28.694418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:11.515 [2024-07-24 17:52:28.694438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e9010 (9): Bad file descriptor 00:27:11.515 [2024-07-24 17:52:28.707658] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:11.515 Running I/O for 1 seconds... 
00:27:11.515 00:27:11.515 Latency(us) 00:27:11.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.515 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:11.515 Verification LBA range: start 0x0 length 0x4000 00:27:11.515 NVMe0n1 : 1.01 16301.13 63.68 0.00 0.00 7818.75 1253.73 23592.96 00:27:11.515 =================================================================================================================== 00:27:11.515 Total : 16301.13 63.68 0.00 0.00 7818.75 1253.73 23592.96 00:27:11.515 17:52:33 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:11.515 17:52:33 -- host/failover.sh@95 -- # grep -q NVMe0 00:27:11.774 17:52:33 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:12.034 17:52:33 -- host/failover.sh@99 -- # grep -q NVMe0 00:27:12.034 17:52:33 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:12.034 17:52:33 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:12.294 17:52:33 -- host/failover.sh@101 -- # sleep 3 00:27:15.590 17:52:36 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:15.590 17:52:36 -- host/failover.sh@103 -- # grep -q NVMe0 00:27:15.590 17:52:36 -- host/failover.sh@108 -- # killprocess 751124 00:27:15.590 17:52:36 -- common/autotest_common.sh@926 -- # '[' -z 751124 ']' 00:27:15.590 17:52:36 -- common/autotest_common.sh@930 -- # kill -0 751124 00:27:15.590 17:52:36 -- common/autotest_common.sh@931 -- # uname 00:27:15.590 17:52:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:15.590 17:52:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 751124 00:27:15.590 17:52:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:15.590 17:52:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:15.590 17:52:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 751124' 00:27:15.590 killing process with pid 751124 00:27:15.590 17:52:36 -- common/autotest_common.sh@945 -- # kill 751124 00:27:15.590 17:52:36 -- common/autotest_common.sh@950 -- # wait 751124 00:27:15.850 17:52:37 -- host/failover.sh@110 -- # sync 00:27:15.850 17:52:37 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:15.850 17:52:37 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:15.850 17:52:37 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:15.850 17:52:37 -- host/failover.sh@116 -- # nvmftestfini 00:27:15.850 17:52:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:15.850 17:52:37 -- nvmf/common.sh@116 -- # sync 00:27:15.850 17:52:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:15.850 17:52:37 -- nvmf/common.sh@119 -- # set +e 00:27:15.850 17:52:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:15.850 17:52:37 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:27:15.850 rmmod nvme_tcp 00:27:15.850 rmmod nvme_fabrics 00:27:15.850 rmmod nvme_keyring 00:27:15.850 17:52:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:15.850 17:52:37 -- nvmf/common.sh@123 -- # set -e 00:27:15.850 17:52:37 -- nvmf/common.sh@124 -- # return 0 00:27:15.850 17:52:37 -- nvmf/common.sh@477 -- # '[' -n 747852 ']' 00:27:15.850 17:52:37 -- nvmf/common.sh@478 -- # killprocess 747852 00:27:15.850 17:52:37 -- common/autotest_common.sh@926 -- # '[' -z 747852 ']' 00:27:15.850 17:52:37 -- common/autotest_common.sh@930 -- # kill -0 747852 00:27:15.850 17:52:37 -- common/autotest_common.sh@931 -- # uname 00:27:16.111 17:52:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:16.111 17:52:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 747852 00:27:16.111 17:52:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:16.111 17:52:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:16.111 17:52:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 747852' 00:27:16.111 killing process with pid 747852 00:27:16.111 17:52:37 -- common/autotest_common.sh@945 -- # kill 747852 00:27:16.111 17:52:37 -- common/autotest_common.sh@950 -- # wait 747852 00:27:16.371 17:52:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:16.371 17:52:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:16.371 17:52:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:16.371 17:52:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:16.371 17:52:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:16.371 17:52:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.371 17:52:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.371 17:52:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.347 17:52:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:18.347 00:27:18.347 real 0m38.119s 00:27:18.347 user 2m2.962s 00:27:18.347 sys 0m7.475s 00:27:18.347 17:52:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.347 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:27:18.347 ************************************ 00:27:18.347 END TEST nvmf_failover 00:27:18.347 ************************************ 00:27:18.347 17:52:39 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:18.347 17:52:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:18.347 17:52:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:18.347 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:27:18.347 ************************************ 00:27:18.347 START TEST nvmf_discovery 00:27:18.347 ************************************ 00:27:18.347 17:52:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:18.347 * Looking for test storage... 
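The failover test exits through nvmftestfini, traced just before run_test hands control to discovery.sh: the subsystem is deleted over RPC, the host-side NVMe fabrics modules are unloaded, the nvmf target is killed, and the test address is flushed. Condensed, with the pid and interface name taken from this run:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp          # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
  modprobe -v -r nvme-fabrics
  killprocess 747852               # test-framework helper that stops the nvmf_tgt started for this test
  ip -4 addr flush cvl_0_1         # drop the 10.0.0.x address from the initiator-side port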
00:27:18.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.347 17:52:39 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.347 17:52:39 -- nvmf/common.sh@7 -- # uname -s 00:27:18.347 17:52:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.347 17:52:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.347 17:52:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.347 17:52:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.347 17:52:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.347 17:52:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.347 17:52:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.347 17:52:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.347 17:52:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.347 17:52:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.347 17:52:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:18.347 17:52:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:18.347 17:52:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.347 17:52:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.347 17:52:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.347 17:52:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.347 17:52:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.347 17:52:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.347 17:52:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.347 17:52:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.347 17:52:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.347 17:52:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.347 17:52:39 -- paths/export.sh@5 -- # export PATH 00:27:18.347 17:52:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.347 17:52:39 -- nvmf/common.sh@46 -- # : 0 00:27:18.347 17:52:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:18.347 17:52:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:18.347 17:52:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:18.347 17:52:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.347 17:52:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.347 17:52:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:18.347 17:52:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:18.348 17:52:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:18.348 17:52:39 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:18.348 17:52:39 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:18.348 17:52:39 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:18.348 17:52:39 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:18.348 17:52:39 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:18.348 17:52:39 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:18.348 17:52:39 -- host/discovery.sh@25 -- # nvmftestinit 00:27:18.348 17:52:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:18.348 17:52:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.348 17:52:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:18.348 17:52:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:18.348 17:52:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:18.348 17:52:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.348 17:52:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.348 17:52:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.348 17:52:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:18.348 17:52:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:18.348 17:52:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:18.348 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:27:24.928 17:52:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:24.928 17:52:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:24.928 17:52:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:24.928 17:52:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:24.928 17:52:45 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:24.928 17:52:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:24.928 17:52:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:24.928 17:52:45 -- nvmf/common.sh@294 -- # net_devs=() 00:27:24.928 17:52:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:24.928 17:52:45 -- nvmf/common.sh@295 -- # e810=() 00:27:24.928 17:52:45 -- nvmf/common.sh@295 -- # local -ga e810 00:27:24.928 17:52:45 -- nvmf/common.sh@296 -- # x722=() 00:27:24.928 17:52:45 -- nvmf/common.sh@296 -- # local -ga x722 00:27:24.928 17:52:45 -- nvmf/common.sh@297 -- # mlx=() 00:27:24.928 17:52:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:24.928 17:52:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.928 17:52:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:24.928 17:52:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:24.928 17:52:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:24.928 17:52:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:24.928 17:52:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:24.928 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:24.928 17:52:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:24.928 17:52:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:24.928 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:24.928 17:52:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:24.928 17:52:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:24.928 
17:52:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.928 17:52:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:24.928 17:52:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.928 17:52:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:24.928 Found net devices under 0000:86:00.0: cvl_0_0 00:27:24.928 17:52:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.928 17:52:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:24.928 17:52:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.928 17:52:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:24.928 17:52:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.928 17:52:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:24.928 Found net devices under 0000:86:00.1: cvl_0_1 00:27:24.928 17:52:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.928 17:52:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:24.928 17:52:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:24.928 17:52:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:24.928 17:52:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:24.928 17:52:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.928 17:52:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.928 17:52:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.928 17:52:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:24.928 17:52:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.928 17:52:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.928 17:52:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:24.928 17:52:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.928 17:52:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.928 17:52:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:24.928 17:52:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:24.928 17:52:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.928 17:52:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.928 17:52:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.928 17:52:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.928 17:52:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:24.928 17:52:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.928 17:52:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.928 17:52:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.928 17:52:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:24.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:27:24.929 00:27:24.929 --- 10.0.0.2 ping statistics --- 00:27:24.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.929 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:27:24.929 17:52:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:27:24.929 00:27:24.929 --- 10.0.0.1 ping statistics --- 00:27:24.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.929 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:27:24.929 17:52:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.929 17:52:45 -- nvmf/common.sh@410 -- # return 0 00:27:24.929 17:52:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:24.929 17:52:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.929 17:52:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:24.929 17:52:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:24.929 17:52:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.929 17:52:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:24.929 17:52:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:24.929 17:52:45 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:24.929 17:52:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:24.929 17:52:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:24.929 17:52:45 -- common/autotest_common.sh@10 -- # set +x 00:27:24.929 17:52:45 -- nvmf/common.sh@469 -- # nvmfpid=756324 00:27:24.929 17:52:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:24.929 17:52:45 -- nvmf/common.sh@470 -- # waitforlisten 756324 00:27:24.929 17:52:45 -- common/autotest_common.sh@819 -- # '[' -z 756324 ']' 00:27:24.929 17:52:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.929 17:52:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:24.929 17:52:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.929 17:52:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:24.929 17:52:45 -- common/autotest_common.sh@10 -- # set +x 00:27:24.929 [2024-07-24 17:52:45.596795] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:24.929 [2024-07-24 17:52:45.596838] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.929 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.929 [2024-07-24 17:52:45.653686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.929 [2024-07-24 17:52:45.731104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:24.929 [2024-07-24 17:52:45.731212] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.929 [2024-07-24 17:52:45.731220] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.929 [2024-07-24 17:52:45.731227] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
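nvmftestinit for the discovery test rebuilds the usual two-port topology before nvmf_tgt is started in the namespace: the first E810 port (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The sequence traced above, condensed:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                    # reachability check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1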
00:27:24.929 [2024-07-24 17:52:45.731242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.929 17:52:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:24.929 17:52:46 -- common/autotest_common.sh@852 -- # return 0 00:27:24.929 17:52:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:24.929 17:52:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:24.929 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:27:24.929 17:52:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.929 17:52:46 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:24.929 17:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:24.929 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:27:24.929 [2024-07-24 17:52:46.419053] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.929 17:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:24.929 17:52:46 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:24.929 17:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:24.929 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:27:24.929 [2024-07-24 17:52:46.427219] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:24.929 17:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:24.929 17:52:46 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:24.929 17:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:24.929 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:27:24.929 null0 00:27:24.929 17:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:24.929 17:52:46 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:24.929 17:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:24.929 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:27:24.929 null1 00:27:24.929 17:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:24.929 17:52:46 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:24.929 17:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:24.929 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:27:24.929 17:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:24.929 17:52:46 -- host/discovery.sh@45 -- # hostpid=756569 00:27:24.929 17:52:46 -- host/discovery.sh@46 -- # waitforlisten 756569 /tmp/host.sock 00:27:24.929 17:52:46 -- common/autotest_common.sh@819 -- # '[' -z 756569 ']' 00:27:24.929 17:52:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:27:24.929 17:52:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:24.929 17:52:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:24.929 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:24.929 17:52:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:24.929 17:52:46 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:24.929 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:27:24.929 [2024-07-24 17:52:46.498059] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:24.929 [2024-07-24 17:52:46.498100] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756569 ] 00:27:24.929 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.190 [2024-07-24 17:52:46.550893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.190 [2024-07-24 17:52:46.632898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:25.190 [2024-07-24 17:52:46.633029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.760 17:52:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:25.760 17:52:47 -- common/autotest_common.sh@852 -- # return 0 00:27:25.760 17:52:47 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:25.760 17:52:47 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:25.760 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.760 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:25.760 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:25.760 17:52:47 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:25.760 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.760 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:25.760 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:25.760 17:52:47 -- host/discovery.sh@72 -- # notify_id=0 00:27:25.760 17:52:47 -- host/discovery.sh@78 -- # get_subsystem_names 00:27:25.760 17:52:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:25.760 17:52:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:25.760 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.760 17:52:47 -- host/discovery.sh@59 -- # sort 00:27:25.760 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:25.760 17:52:47 -- host/discovery.sh@59 -- # xargs 00:27:25.760 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.020 17:52:47 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:27:26.020 17:52:47 -- host/discovery.sh@79 -- # get_bdev_list 00:27:26.020 17:52:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.020 17:52:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.020 17:52:47 -- host/discovery.sh@55 -- # sort 00:27:26.020 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.020 17:52:47 -- host/discovery.sh@55 -- # xargs 00:27:26.020 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.020 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.020 17:52:47 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:27:26.020 17:52:47 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:26.020 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.020 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.020 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.020 17:52:47 -- host/discovery.sh@82 -- # get_subsystem_names 00:27:26.020 17:52:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.020 17:52:47 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:27:26.020 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.020 17:52:47 -- host/discovery.sh@59 -- # sort 00:27:26.020 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.020 17:52:47 -- host/discovery.sh@59 -- # xargs 00:27:26.020 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.020 17:52:47 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:27:26.020 17:52:47 -- host/discovery.sh@83 -- # get_bdev_list 00:27:26.020 17:52:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.020 17:52:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.020 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.020 17:52:47 -- host/discovery.sh@55 -- # sort 00:27:26.021 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.021 17:52:47 -- host/discovery.sh@55 -- # xargs 00:27:26.021 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.021 17:52:47 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:26.021 17:52:47 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:26.021 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.021 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.021 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.021 17:52:47 -- host/discovery.sh@86 -- # get_subsystem_names 00:27:26.021 17:52:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.021 17:52:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.021 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.021 17:52:47 -- host/discovery.sh@59 -- # sort 00:27:26.021 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.021 17:52:47 -- host/discovery.sh@59 -- # xargs 00:27:26.021 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.021 17:52:47 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:27:26.021 17:52:47 -- host/discovery.sh@87 -- # get_bdev_list 00:27:26.021 17:52:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.021 17:52:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.021 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.021 17:52:47 -- host/discovery.sh@55 -- # sort 00:27:26.021 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.021 17:52:47 -- host/discovery.sh@55 -- # xargs 00:27:26.021 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.281 17:52:47 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:26.281 17:52:47 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:26.281 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.281 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.281 [2024-07-24 17:52:47.634425] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.281 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.281 17:52:47 -- host/discovery.sh@92 -- # get_subsystem_names 00:27:26.281 17:52:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.281 17:52:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.281 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.281 17:52:47 -- host/discovery.sh@59 -- # sort 00:27:26.281 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.281 17:52:47 
-- host/discovery.sh@59 -- # xargs 00:27:26.281 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.281 17:52:47 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:26.281 17:52:47 -- host/discovery.sh@93 -- # get_bdev_list 00:27:26.281 17:52:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.281 17:52:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.281 17:52:47 -- host/discovery.sh@55 -- # sort 00:27:26.281 17:52:47 -- host/discovery.sh@55 -- # xargs 00:27:26.281 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.281 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.281 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.281 17:52:47 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:27:26.281 17:52:47 -- host/discovery.sh@94 -- # get_notification_count 00:27:26.281 17:52:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:26.281 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.281 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.281 17:52:47 -- host/discovery.sh@74 -- # jq '. | length' 00:27:26.281 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.281 17:52:47 -- host/discovery.sh@74 -- # notification_count=0 00:27:26.281 17:52:47 -- host/discovery.sh@75 -- # notify_id=0 00:27:26.281 17:52:47 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:27:26.281 17:52:47 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:26.281 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.281 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:26.281 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.281 17:52:47 -- host/discovery.sh@100 -- # sleep 1 00:27:26.851 [2024-07-24 17:52:48.359283] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:26.851 [2024-07-24 17:52:48.359312] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:26.851 [2024-07-24 17:52:48.359329] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:26.851 [2024-07-24 17:52:48.447589] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:27.111 [2024-07-24 17:52:48.550206] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:27.111 [2024-07-24 17:52:48.550228] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:27.371 17:52:48 -- host/discovery.sh@101 -- # get_subsystem_names 00:27:27.371 17:52:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:27.371 17:52:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:27.371 17:52:48 -- host/discovery.sh@59 -- # sort 00:27:27.371 17:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.371 17:52:48 -- host/discovery.sh@59 -- # xargs 00:27:27.371 17:52:48 -- common/autotest_common.sh@10 -- # set +x 00:27:27.371 17:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.371 17:52:48 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.371 17:52:48 -- host/discovery.sh@102 -- # get_bdev_list 00:27:27.371 17:52:48 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.371 17:52:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:27.371 17:52:48 -- host/discovery.sh@55 -- # xargs 00:27:27.371 17:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.371 17:52:48 -- host/discovery.sh@55 -- # sort 00:27:27.371 17:52:48 -- common/autotest_common.sh@10 -- # set +x 00:27:27.371 17:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.371 17:52:48 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:27.371 17:52:48 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:27:27.371 17:52:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:27.371 17:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.371 17:52:48 -- common/autotest_common.sh@10 -- # set +x 00:27:27.371 17:52:48 -- host/discovery.sh@63 -- # xargs 00:27:27.371 17:52:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:27.371 17:52:48 -- host/discovery.sh@63 -- # sort -n 00:27:27.371 17:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.371 17:52:48 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:27:27.371 17:52:48 -- host/discovery.sh@104 -- # get_notification_count 00:27:27.371 17:52:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:27.371 17:52:48 -- host/discovery.sh@74 -- # jq '. | length' 00:27:27.371 17:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.371 17:52:48 -- common/autotest_common.sh@10 -- # set +x 00:27:27.371 17:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.631 17:52:48 -- host/discovery.sh@74 -- # notification_count=1 00:27:27.631 17:52:48 -- host/discovery.sh@75 -- # notify_id=1 00:27:27.631 17:52:48 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:27:27.631 17:52:48 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:27.631 17:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.631 17:52:48 -- common/autotest_common.sh@10 -- # set +x 00:27:27.631 17:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.631 17:52:48 -- host/discovery.sh@109 -- # sleep 1 00:27:28.571 17:52:49 -- host/discovery.sh@110 -- # get_bdev_list 00:27:28.571 17:52:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.571 17:52:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:28.571 17:52:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.571 17:52:49 -- host/discovery.sh@55 -- # sort 00:27:28.571 17:52:49 -- common/autotest_common.sh@10 -- # set +x 00:27:28.571 17:52:49 -- host/discovery.sh@55 -- # xargs 00:27:28.571 17:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.571 17:52:50 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:28.571 17:52:50 -- host/discovery.sh@111 -- # get_notification_count 00:27:28.571 17:52:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:28.571 17:52:50 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:28.571 17:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.571 17:52:50 -- common/autotest_common.sh@10 -- # set +x 00:27:28.571 17:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.571 17:52:50 -- host/discovery.sh@74 -- # notification_count=1 00:27:28.571 17:52:50 -- host/discovery.sh@75 -- # notify_id=2 00:27:28.571 17:52:50 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:27:28.571 17:52:50 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:28.571 17:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.571 17:52:50 -- common/autotest_common.sh@10 -- # set +x 00:27:28.571 [2024-07-24 17:52:50.097313] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:28.571 [2024-07-24 17:52:50.098496] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:28.571 [2024-07-24 17:52:50.098523] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:28.571 17:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.571 17:52:50 -- host/discovery.sh@117 -- # sleep 1 00:27:28.831 [2024-07-24 17:52:50.225881] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:28.831 [2024-07-24 17:52:50.283741] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:28.831 [2024-07-24 17:52:50.283757] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:28.831 [2024-07-24 17:52:50.283763] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:29.772 17:52:51 -- host/discovery.sh@118 -- # get_subsystem_names 00:27:29.772 17:52:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:29.772 17:52:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:29.772 17:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.772 17:52:51 -- host/discovery.sh@59 -- # sort 00:27:29.772 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.772 17:52:51 -- host/discovery.sh@59 -- # xargs 00:27:29.772 17:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.772 17:52:51 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.772 17:52:51 -- host/discovery.sh@119 -- # get_bdev_list 00:27:29.772 17:52:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.772 17:52:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:29.772 17:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.772 17:52:51 -- host/discovery.sh@55 -- # sort 00:27:29.772 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.772 17:52:51 -- host/discovery.sh@55 -- # xargs 00:27:29.772 17:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.772 17:52:51 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:29.772 17:52:51 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:27:29.772 17:52:51 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:29.772 17:52:51 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:29.772 17:52:51 -- host/discovery.sh@63 
-- # sort -n 00:27:29.772 17:52:51 -- host/discovery.sh@63 -- # xargs 00:27:29.772 17:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.772 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.772 17:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.772 17:52:51 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:29.772 17:52:51 -- host/discovery.sh@121 -- # get_notification_count 00:27:29.772 17:52:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:29.772 17:52:51 -- host/discovery.sh@74 -- # jq '. | length' 00:27:29.772 17:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.772 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.772 17:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.772 17:52:51 -- host/discovery.sh@74 -- # notification_count=0 00:27:29.772 17:52:51 -- host/discovery.sh@75 -- # notify_id=2 00:27:29.772 17:52:51 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:27:29.772 17:52:51 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:29.772 17:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.772 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.772 [2024-07-24 17:52:51.309788] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:29.772 [2024-07-24 17:52:51.309812] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:29.772 [2024-07-24 17:52:51.312657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.772 [2024-07-24 17:52:51.312674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.772 [2024-07-24 17:52:51.312685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.772 [2024-07-24 17:52:51.312695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.772 [2024-07-24 17:52:51.312709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.772 [2024-07-24 17:52:51.312719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.772 [2024-07-24 17:52:51.312729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.772 [2024-07-24 17:52:51.312739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.772 [2024-07-24 17:52:51.312749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6f9f0 is same with the state(5) to be set 00:27:29.772 17:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.772 17:52:51 -- host/discovery.sh@127 -- # sleep 1 00:27:29.772 [2024-07-24 17:52:51.322669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f9f0 (9): Bad file descriptor 00:27:29.772 [2024-07-24 17:52:51.332709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.772 [2024-07-24 17:52:51.333213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.772 [2024-07-24 17:52:51.333713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.772 [2024-07-24 17:52:51.333727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6f9f0 with addr=10.0.0.2, port=4420 00:27:29.772 [2024-07-24 17:52:51.333738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6f9f0 is same with the state(5) to be set 00:27:29.772 [2024-07-24 17:52:51.333755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f9f0 (9): Bad file descriptor 00:27:29.772 [2024-07-24 17:52:51.333779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.772 [2024-07-24 17:52:51.333790] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.772 [2024-07-24 17:52:51.333801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.772 [2024-07-24 17:52:51.333817] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.772 [2024-07-24 17:52:51.342766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.772 [2024-07-24 17:52:51.343255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.772 [2024-07-24 17:52:51.343733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.772 [2024-07-24 17:52:51.343745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6f9f0 with addr=10.0.0.2, port=4420 00:27:29.772 [2024-07-24 17:52:51.343757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6f9f0 is same with the state(5) to be set 00:27:29.772 [2024-07-24 17:52:51.343782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f9f0 (9): Bad file descriptor 00:27:29.772 [2024-07-24 17:52:51.343803] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.772 [2024-07-24 17:52:51.343813] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.772 [2024-07-24 17:52:51.343823] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.772 [2024-07-24 17:52:51.343844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
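The connect() failed, errno = 111 (ECONNREFUSED) entries in this stretch are the expected fallout of the step taken just before them: host/discovery.sh@126 removed the 4420 listener from nqn.2016-06.io.spdk:cnode0, so the host's reconnect attempts to 10.0.0.2:4420 are refused until the discovery poller reports that path as gone (the "4420 not found ... 4421 found again" lines below). A hedged sketch of the triggering step, using scripts/rpc.py directly; rpc_cmd in this trace is assumed to be a wrapper around it aimed at the target's default RPC socket:

  # Drop the 4420 listener while the host still has an active controller attached through it.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Host side: bdev_nvme keeps resetting/reconnecting the dropped path and logging errno 111
  # until the next discovery log page drops 10.0.0.2:4420, leaving only 10.0.0.2:4421.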
00:27:29.772 [2024-07-24 17:52:51.352818] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.773 [2024-07-24 17:52:51.353245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.773 [2024-07-24 17:52:51.353710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.773 [2024-07-24 17:52:51.353725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6f9f0 with addr=10.0.0.2, port=4420 00:27:29.773 [2024-07-24 17:52:51.353736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6f9f0 is same with the state(5) to be set 00:27:29.773 [2024-07-24 17:52:51.353788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f9f0 (9): Bad file descriptor 00:27:29.773 [2024-07-24 17:52:51.353803] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.773 [2024-07-24 17:52:51.353813] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.773 [2024-07-24 17:52:51.353823] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.773 [2024-07-24 17:52:51.353837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.773 [2024-07-24 17:52:51.362870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.773 [2024-07-24 17:52:51.363342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.773 [2024-07-24 17:52:51.363815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.773 [2024-07-24 17:52:51.363827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6f9f0 with addr=10.0.0.2, port=4420 00:27:29.773 [2024-07-24 17:52:51.363838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6f9f0 is same with the state(5) to be set 00:27:29.773 [2024-07-24 17:52:51.363864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f9f0 (9): Bad file descriptor 00:27:29.773 [2024-07-24 17:52:51.363886] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.773 [2024-07-24 17:52:51.363896] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.773 [2024-07-24 17:52:51.363906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.773 [2024-07-24 17:52:51.363920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.033 [2024-07-24 17:52:51.372925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:30.033 [2024-07-24 17:52:51.373371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-07-24 17:52:51.373805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-07-24 17:52:51.373817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6f9f0 with addr=10.0.0.2, port=4420 00:27:30.033 [2024-07-24 17:52:51.373828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6f9f0 is same with the state(5) to be set 00:27:30.033 [2024-07-24 17:52:51.373843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f9f0 (9): Bad file descriptor 00:27:30.033 [2024-07-24 17:52:51.373865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:30.033 [2024-07-24 17:52:51.373876] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:30.033 [2024-07-24 17:52:51.373886] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:30.033 [2024-07-24 17:52:51.373900] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.033 [2024-07-24 17:52:51.382976] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:30.033 [2024-07-24 17:52:51.383412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-07-24 17:52:51.383914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-07-24 17:52:51.383926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6f9f0 with addr=10.0.0.2, port=4420 00:27:30.033 [2024-07-24 17:52:51.383941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6f9f0 is same with the state(5) to be set 00:27:30.033 [2024-07-24 17:52:51.383956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f9f0 (9): Bad file descriptor 00:27:30.033 [2024-07-24 17:52:51.383984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:30.033 [2024-07-24 17:52:51.383995] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:30.033 [2024-07-24 17:52:51.384004] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:30.033 [2024-07-24 17:52:51.384018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
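Once the reconnect attempts settle, the harness confirms that only the 4421 path remains on controller nvme0 (the [[ 4421 == \4\4\2\1 ]] comparison further down). The same check can be expressed with the RPC and jq filter visible in the trace; the /tmp/host.sock path is specific to this run:

  # List the trsvcid of every path on the host-side controller nvme0; expect just "4421" here.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs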
00:27:30.033 [2024-07-24 17:52:51.393027] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:30.033 [2024-07-24 17:52:51.393560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-07-24 17:52:51.394058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-07-24 17:52:51.394071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6f9f0 with addr=10.0.0.2, port=4420 00:27:30.033 [2024-07-24 17:52:51.394082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6f9f0 is same with the state(5) to be set 00:27:30.033 [2024-07-24 17:52:51.394097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f9f0 (9): Bad file descriptor 00:27:30.033 [2024-07-24 17:52:51.394119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:30.033 [2024-07-24 17:52:51.394129] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:30.033 [2024-07-24 17:52:51.394139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:30.033 [2024-07-24 17:52:51.394154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.033 [2024-07-24 17:52:51.399127] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:30.033 [2024-07-24 17:52:51.399144] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:30.973 17:52:52 -- host/discovery.sh@128 -- # get_subsystem_names 00:27:30.973 17:52:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:30.973 17:52:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:30.973 17:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.973 17:52:52 -- host/discovery.sh@59 -- # sort 00:27:30.973 17:52:52 -- common/autotest_common.sh@10 -- # set +x 00:27:30.973 17:52:52 -- host/discovery.sh@59 -- # xargs 00:27:30.973 17:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.973 17:52:52 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.973 17:52:52 -- host/discovery.sh@129 -- # get_bdev_list 00:27:30.973 17:52:52 -- host/discovery.sh@55 -- # sort 00:27:30.973 17:52:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.973 17:52:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:30.973 17:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.973 17:52:52 -- common/autotest_common.sh@10 -- # set +x 00:27:30.973 17:52:52 -- host/discovery.sh@55 -- # xargs 00:27:30.973 17:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.973 17:52:52 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:30.973 17:52:52 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:27:30.973 17:52:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:30.973 17:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.973 17:52:52 -- common/autotest_common.sh@10 -- # set +x 00:27:30.973 17:52:52 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:30.973 17:52:52 -- 
host/discovery.sh@63 -- # sort -n 00:27:30.973 17:52:52 -- host/discovery.sh@63 -- # xargs 00:27:30.973 17:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.973 17:52:52 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:27:30.973 17:52:52 -- host/discovery.sh@131 -- # get_notification_count 00:27:30.973 17:52:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:30.973 17:52:52 -- host/discovery.sh@74 -- # jq '. | length' 00:27:30.973 17:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.973 17:52:52 -- common/autotest_common.sh@10 -- # set +x 00:27:30.973 17:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.973 17:52:52 -- host/discovery.sh@74 -- # notification_count=0 00:27:30.973 17:52:52 -- host/discovery.sh@75 -- # notify_id=2 00:27:30.973 17:52:52 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:27:30.973 17:52:52 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:30.973 17:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.973 17:52:52 -- common/autotest_common.sh@10 -- # set +x 00:27:30.973 17:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.973 17:52:52 -- host/discovery.sh@135 -- # sleep 1 00:27:32.374 17:52:53 -- host/discovery.sh@136 -- # get_subsystem_names 00:27:32.374 17:52:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:32.374 17:52:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:32.374 17:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.374 17:52:53 -- host/discovery.sh@59 -- # sort 00:27:32.374 17:52:53 -- common/autotest_common.sh@10 -- # set +x 00:27:32.374 17:52:53 -- host/discovery.sh@59 -- # xargs 00:27:32.374 17:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.374 17:52:53 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:27:32.374 17:52:53 -- host/discovery.sh@137 -- # get_bdev_list 00:27:32.374 17:52:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.374 17:52:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:32.374 17:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.374 17:52:53 -- host/discovery.sh@55 -- # sort 00:27:32.374 17:52:53 -- common/autotest_common.sh@10 -- # set +x 00:27:32.374 17:52:53 -- host/discovery.sh@55 -- # xargs 00:27:32.374 17:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.374 17:52:53 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:27:32.374 17:52:53 -- host/discovery.sh@138 -- # get_notification_count 00:27:32.374 17:52:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:32.374 17:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.374 17:52:53 -- common/autotest_common.sh@10 -- # set +x 00:27:32.374 17:52:53 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:32.374 17:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.374 17:52:53 -- host/discovery.sh@74 -- # notification_count=2 00:27:32.374 17:52:53 -- host/discovery.sh@75 -- # notify_id=4 00:27:32.374 17:52:53 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:27:32.374 17:52:53 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:32.374 17:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.374 17:52:53 -- common/autotest_common.sh@10 -- # set +x 00:27:33.314 [2024-07-24 17:52:54.725053] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:33.314 [2024-07-24 17:52:54.725070] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:33.314 [2024-07-24 17:52:54.725083] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:33.314 [2024-07-24 17:52:54.854491] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:33.574 [2024-07-24 17:52:54.959417] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:33.574 [2024-07-24 17:52:54.959443] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:33.574 17:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.574 17:52:54 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:33.574 17:52:54 -- common/autotest_common.sh@640 -- # local es=0 00:27:33.574 17:52:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:33.574 17:52:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:33.574 17:52:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.574 17:52:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:33.574 17:52:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.574 17:52:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:33.574 17:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.574 17:52:54 -- common/autotest_common.sh@10 -- # set +x 00:27:33.574 request: 00:27:33.574 { 00:27:33.574 "name": "nvme", 00:27:33.574 "trtype": "tcp", 00:27:33.574 "traddr": "10.0.0.2", 00:27:33.574 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:33.574 "adrfam": "ipv4", 00:27:33.574 "trsvcid": "8009", 00:27:33.574 "wait_for_attach": true, 00:27:33.574 "method": "bdev_nvme_start_discovery", 00:27:33.574 "req_id": 1 00:27:33.574 } 00:27:33.574 Got JSON-RPC error response 00:27:33.574 response: 00:27:33.574 { 00:27:33.574 "code": -17, 00:27:33.574 "message": "File exists" 00:27:33.574 } 00:27:33.574 17:52:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:33.574 17:52:54 -- common/autotest_common.sh@643 -- # es=1 00:27:33.574 17:52:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:33.574 17:52:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:33.574 17:52:54 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:33.574 17:52:54 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:27:33.574 17:52:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:33.574 17:52:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:33.574 17:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.574 17:52:54 -- host/discovery.sh@67 -- # sort 00:27:33.575 17:52:54 -- common/autotest_common.sh@10 -- # set +x 00:27:33.575 17:52:54 -- host/discovery.sh@67 -- # xargs 00:27:33.575 17:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.575 17:52:55 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:27:33.575 17:52:55 -- host/discovery.sh@147 -- # get_bdev_list 00:27:33.575 17:52:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.575 17:52:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:33.575 17:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.575 17:52:55 -- host/discovery.sh@55 -- # sort 00:27:33.575 17:52:55 -- common/autotest_common.sh@10 -- # set +x 00:27:33.575 17:52:55 -- host/discovery.sh@55 -- # xargs 00:27:33.575 17:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.575 17:52:55 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:33.575 17:52:55 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:33.575 17:52:55 -- common/autotest_common.sh@640 -- # local es=0 00:27:33.575 17:52:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:33.575 17:52:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:33.575 17:52:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.575 17:52:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:33.575 17:52:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.575 17:52:55 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:33.575 17:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.575 17:52:55 -- common/autotest_common.sh@10 -- # set +x 00:27:33.575 request: 00:27:33.575 { 00:27:33.575 "name": "nvme_second", 00:27:33.575 "trtype": "tcp", 00:27:33.575 "traddr": "10.0.0.2", 00:27:33.575 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:33.575 "adrfam": "ipv4", 00:27:33.575 "trsvcid": "8009", 00:27:33.575 "wait_for_attach": true, 00:27:33.575 "method": "bdev_nvme_start_discovery", 00:27:33.575 "req_id": 1 00:27:33.575 } 00:27:33.575 Got JSON-RPC error response 00:27:33.575 response: 00:27:33.575 { 00:27:33.575 "code": -17, 00:27:33.575 "message": "File exists" 00:27:33.575 } 00:27:33.575 17:52:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:33.575 17:52:55 -- common/autotest_common.sh@643 -- # es=1 00:27:33.575 17:52:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:33.575 17:52:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:33.575 17:52:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:33.575 17:52:55 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:27:33.575 17:52:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:27:33.575 17:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.575 17:52:55 -- common/autotest_common.sh@10 -- # set +x 00:27:33.575 17:52:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:33.575 17:52:55 -- host/discovery.sh@67 -- # sort 00:27:33.575 17:52:55 -- host/discovery.sh@67 -- # xargs 00:27:33.575 17:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.575 17:52:55 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:27:33.575 17:52:55 -- host/discovery.sh@153 -- # get_bdev_list 00:27:33.575 17:52:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.575 17:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.575 17:52:55 -- host/discovery.sh@55 -- # xargs 00:27:33.575 17:52:55 -- common/autotest_common.sh@10 -- # set +x 00:27:33.575 17:52:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:33.575 17:52:55 -- host/discovery.sh@55 -- # sort 00:27:33.835 17:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.835 17:52:55 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:33.835 17:52:55 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:33.835 17:52:55 -- common/autotest_common.sh@640 -- # local es=0 00:27:33.835 17:52:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:33.835 17:52:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:33.835 17:52:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.835 17:52:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:33.835 17:52:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.835 17:52:55 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:33.835 17:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.835 17:52:55 -- common/autotest_common.sh@10 -- # set +x 00:27:34.775 [2024-07-24 17:52:56.204386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.775 [2024-07-24 17:52:56.204852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.775 [2024-07-24 17:52:56.204873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe78960 with addr=10.0.0.2, port=8010 00:27:34.775 [2024-07-24 17:52:56.204889] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:34.775 [2024-07-24 17:52:56.204899] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:34.775 [2024-07-24 17:52:56.204908] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:35.714 [2024-07-24 17:52:57.206759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-24 17:52:57.207243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.715 [2024-07-24 17:52:57.207264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe78960 with addr=10.0.0.2, port=8010 00:27:35.715 [2024-07-24 17:52:57.207278] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: 
*ERROR*: failed to create admin qpair 00:27:35.715 [2024-07-24 17:52:57.207287] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:35.715 [2024-07-24 17:52:57.207296] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:36.655 [2024-07-24 17:52:58.208725] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:36.655 request: 00:27:36.655 { 00:27:36.655 "name": "nvme_second", 00:27:36.655 "trtype": "tcp", 00:27:36.655 "traddr": "10.0.0.2", 00:27:36.655 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:36.655 "adrfam": "ipv4", 00:27:36.655 "trsvcid": "8010", 00:27:36.655 "attach_timeout_ms": 3000, 00:27:36.655 "method": "bdev_nvme_start_discovery", 00:27:36.655 "req_id": 1 00:27:36.655 } 00:27:36.655 Got JSON-RPC error response 00:27:36.655 response: 00:27:36.655 { 00:27:36.655 "code": -110, 00:27:36.655 "message": "Connection timed out" 00:27:36.655 } 00:27:36.655 17:52:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:36.655 17:52:58 -- common/autotest_common.sh@643 -- # es=1 00:27:36.655 17:52:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:36.655 17:52:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:36.655 17:52:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:36.655 17:52:58 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:27:36.655 17:52:58 -- host/discovery.sh@67 -- # xargs 00:27:36.655 17:52:58 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:36.655 17:52:58 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:36.655 17:52:58 -- host/discovery.sh@67 -- # sort 00:27:36.655 17:52:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:36.655 17:52:58 -- common/autotest_common.sh@10 -- # set +x 00:27:36.655 17:52:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:36.914 17:52:58 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:27:36.914 17:52:58 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:27:36.914 17:52:58 -- host/discovery.sh@162 -- # kill 756569 00:27:36.914 17:52:58 -- host/discovery.sh@163 -- # nvmftestfini 00:27:36.914 17:52:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:36.914 17:52:58 -- nvmf/common.sh@116 -- # sync 00:27:36.914 17:52:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:36.914 17:52:58 -- nvmf/common.sh@119 -- # set +e 00:27:36.914 17:52:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:36.914 17:52:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:36.915 rmmod nvme_tcp 00:27:36.915 rmmod nvme_fabrics 00:27:36.915 rmmod nvme_keyring 00:27:36.915 17:52:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:36.915 17:52:58 -- nvmf/common.sh@123 -- # set -e 00:27:36.915 17:52:58 -- nvmf/common.sh@124 -- # return 0 00:27:36.915 17:52:58 -- nvmf/common.sh@477 -- # '[' -n 756324 ']' 00:27:36.915 17:52:58 -- nvmf/common.sh@478 -- # killprocess 756324 00:27:36.915 17:52:58 -- common/autotest_common.sh@926 -- # '[' -z 756324 ']' 00:27:36.915 17:52:58 -- common/autotest_common.sh@930 -- # kill -0 756324 00:27:36.915 17:52:58 -- common/autotest_common.sh@931 -- # uname 00:27:36.915 17:52:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:36.915 17:52:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 756324 00:27:36.915 17:52:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:36.915 17:52:58 -- common/autotest_common.sh@936 -- # '[' 
reactor_1 = sudo ']' 00:27:36.915 17:52:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 756324' 00:27:36.915 killing process with pid 756324 00:27:36.915 17:52:58 -- common/autotest_common.sh@945 -- # kill 756324 00:27:36.915 17:52:58 -- common/autotest_common.sh@950 -- # wait 756324 00:27:37.174 17:52:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:37.174 17:52:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:37.174 17:52:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:37.174 17:52:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:37.174 17:52:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:37.174 17:52:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.174 17:52:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.174 17:52:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.087 17:53:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:39.087 00:27:39.087 real 0m20.801s 00:27:39.087 user 0m27.784s 00:27:39.087 sys 0m5.710s 00:27:39.087 17:53:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.087 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:27:39.087 ************************************ 00:27:39.087 END TEST nvmf_discovery 00:27:39.087 ************************************ 00:27:39.088 17:53:00 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:39.088 17:53:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:39.088 17:53:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:39.088 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:27:39.088 ************************************ 00:27:39.088 START TEST nvmf_discovery_remove_ifc 00:27:39.088 ************************************ 00:27:39.088 17:53:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:39.348 * Looking for test storage... 
00:27:39.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:39.348 17:53:00 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.348 17:53:00 -- nvmf/common.sh@7 -- # uname -s 00:27:39.348 17:53:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.348 17:53:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.348 17:53:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.348 17:53:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.348 17:53:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.348 17:53:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.348 17:53:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.348 17:53:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.348 17:53:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.348 17:53:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.348 17:53:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.348 17:53:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.348 17:53:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.348 17:53:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.348 17:53:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.348 17:53:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.348 17:53:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.348 17:53:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.348 17:53:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.348 17:53:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.348 17:53:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.348 17:53:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.348 17:53:00 -- paths/export.sh@5 -- # export PATH 00:27:39.348 17:53:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.348 17:53:00 -- nvmf/common.sh@46 -- # : 0 00:27:39.348 17:53:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:39.348 17:53:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:39.348 17:53:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:39.348 17:53:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.348 17:53:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.348 17:53:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:39.348 17:53:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:39.348 17:53:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:39.348 17:53:00 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:39.348 17:53:00 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:39.348 17:53:00 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:39.348 17:53:00 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:39.348 17:53:00 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:39.348 17:53:00 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:39.348 17:53:00 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:39.348 17:53:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:39.348 17:53:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.348 17:53:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:39.348 17:53:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:39.348 17:53:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:39.348 17:53:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.348 17:53:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.348 17:53:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.348 17:53:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:39.348 17:53:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:39.348 17:53:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:39.348 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:27:44.640 17:53:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:44.640 17:53:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:44.640 17:53:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:44.640 17:53:06 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:44.640 17:53:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:44.640 17:53:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:44.640 17:53:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:44.640 17:53:06 -- nvmf/common.sh@294 -- # net_devs=() 00:27:44.640 17:53:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:44.640 17:53:06 -- nvmf/common.sh@295 -- # e810=() 00:27:44.640 17:53:06 -- nvmf/common.sh@295 -- # local -ga e810 00:27:44.640 17:53:06 -- nvmf/common.sh@296 -- # x722=() 00:27:44.640 17:53:06 -- nvmf/common.sh@296 -- # local -ga x722 00:27:44.640 17:53:06 -- nvmf/common.sh@297 -- # mlx=() 00:27:44.640 17:53:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:44.640 17:53:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.640 17:53:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:44.640 17:53:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:44.640 17:53:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:44.640 17:53:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:44.640 17:53:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:44.640 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:44.640 17:53:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:44.640 17:53:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:44.640 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:44.640 17:53:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:44.640 17:53:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:44.640 17:53:06 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:44.640 17:53:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.640 17:53:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:44.640 17:53:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.640 17:53:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:44.640 Found net devices under 0000:86:00.0: cvl_0_0 00:27:44.640 17:53:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.640 17:53:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:44.640 17:53:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.640 17:53:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:44.640 17:53:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.640 17:53:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:44.640 Found net devices under 0000:86:00.1: cvl_0_1 00:27:44.640 17:53:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.640 17:53:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:44.640 17:53:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:44.640 17:53:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:44.640 17:53:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:44.640 17:53:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.640 17:53:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.640 17:53:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.640 17:53:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:44.640 17:53:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.640 17:53:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.640 17:53:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:44.640 17:53:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.640 17:53:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.640 17:53:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:44.640 17:53:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:44.640 17:53:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.640 17:53:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.640 17:53:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.640 17:53:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.640 17:53:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:44.640 17:53:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:44.919 17:53:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:44.919 17:53:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:44.919 17:53:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:44.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:44.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:27:44.919 00:27:44.919 --- 10.0.0.2 ping statistics --- 00:27:44.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.919 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:27:44.919 17:53:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:44.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:44.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:27:44.920 00:27:44.920 --- 10.0.0.1 ping statistics --- 00:27:44.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.920 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:27:44.920 17:53:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.920 17:53:06 -- nvmf/common.sh@410 -- # return 0 00:27:44.920 17:53:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:44.920 17:53:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.920 17:53:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:44.920 17:53:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:44.920 17:53:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.920 17:53:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:44.920 17:53:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:44.920 17:53:06 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:44.920 17:53:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:44.920 17:53:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:44.920 17:53:06 -- common/autotest_common.sh@10 -- # set +x 00:27:44.920 17:53:06 -- nvmf/common.sh@469 -- # nvmfpid=762138 00:27:44.920 17:53:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:44.920 17:53:06 -- nvmf/common.sh@470 -- # waitforlisten 762138 00:27:44.920 17:53:06 -- common/autotest_common.sh@819 -- # '[' -z 762138 ']' 00:27:44.920 17:53:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.920 17:53:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:44.921 17:53:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.921 17:53:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:44.921 17:53:06 -- common/autotest_common.sh@10 -- # set +x 00:27:44.921 [2024-07-24 17:53:06.436267] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:44.921 [2024-07-24 17:53:06.436310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.921 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.921 [2024-07-24 17:53:06.492294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.181 [2024-07-24 17:53:06.563093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:45.181 [2024-07-24 17:53:06.563203] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
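The nvmftestinit sequence traced above builds the whole NVMe/TCP test topology out of plain iproute2 commands: the second E810 port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, while the first port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, with both sides verified by a single ping. A minimal standalone sketch of that setup, assuming the same cvl_0_0/cvl_0_1 interface names seen in this log, is:

    # Target side lives in its own network namespace; initiator stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator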
00:27:45.181 [2024-07-24 17:53:06.563211] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.181 [2024-07-24 17:53:06.563217] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.181 [2024-07-24 17:53:06.563234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.752 17:53:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:45.752 17:53:07 -- common/autotest_common.sh@852 -- # return 0 00:27:45.752 17:53:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:45.752 17:53:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:45.752 17:53:07 -- common/autotest_common.sh@10 -- # set +x 00:27:45.752 17:53:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.752 17:53:07 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:45.752 17:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:45.753 17:53:07 -- common/autotest_common.sh@10 -- # set +x 00:27:45.753 [2024-07-24 17:53:07.265772] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.753 [2024-07-24 17:53:07.273917] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:45.753 null0 00:27:45.753 [2024-07-24 17:53:07.305900] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.753 17:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:45.753 17:53:07 -- host/discovery_remove_ifc.sh@59 -- # hostpid=762174 00:27:45.753 17:53:07 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 762174 /tmp/host.sock 00:27:45.753 17:53:07 -- common/autotest_common.sh@819 -- # '[' -z 762174 ']' 00:27:45.753 17:53:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:27:45.753 17:53:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:45.753 17:53:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:45.753 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:45.753 17:53:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:45.753 17:53:07 -- common/autotest_common.sh@10 -- # set +x 00:27:45.753 17:53:07 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:46.013 [2024-07-24 17:53:07.370132] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:46.013 [2024-07-24 17:53:07.370178] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762174 ] 00:27:46.013 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.013 [2024-07-24 17:53:07.423767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.013 [2024-07-24 17:53:07.507702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:46.013 [2024-07-24 17:53:07.507830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.584 17:53:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:46.584 17:53:08 -- common/autotest_common.sh@852 -- # return 0 00:27:46.584 17:53:08 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:46.584 17:53:08 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:46.584 17:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.584 17:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:46.584 17:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:46.584 17:53:08 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:46.584 17:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.584 17:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:46.845 17:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:46.845 17:53:08 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:46.845 17:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.845 17:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:47.784 [2024-07-24 17:53:09.306302] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:47.784 [2024-07-24 17:53:09.306328] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:47.784 [2024-07-24 17:53:09.306344] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:48.044 [2024-07-24 17:53:09.393598] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:48.044 [2024-07-24 17:53:09.578326] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:48.044 [2024-07-24 17:53:09.578365] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:48.044 [2024-07-24 17:53:09.578386] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:48.044 [2024-07-24 17:53:09.578400] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:48.044 [2024-07-24 17:53:09.578420] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:48.044 17:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.044 17:53:09 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:48.044 17:53:09 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:27:48.044 17:53:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.044 17:53:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.044 17:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.044 17:53:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.044 17:53:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:48.044 17:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:48.044 [2024-07-24 17:53:09.584790] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1610980 was disconnected and freed. delete nvme_qpair. 00:27:48.044 17:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.044 17:53:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:48.044 17:53:09 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:48.044 17:53:09 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:48.304 17:53:09 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:48.304 17:53:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.304 17:53:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.304 17:53:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:48.304 17:53:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.304 17:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.304 17:53:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.304 17:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:48.304 17:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.304 17:53:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:48.304 17:53:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:49.371 17:53:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:49.371 17:53:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.371 17:53:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:49.371 17:53:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.371 17:53:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:49.371 17:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:49.371 17:53:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:49.371 17:53:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.371 17:53:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:49.371 17:53:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:50.309 17:53:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.309 17:53:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.309 17:53:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.309 17:53:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.309 17:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.309 17:53:11 -- common/autotest_common.sh@10 -- # set +x 00:27:50.309 17:53:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.309 17:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.309 17:53:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:50.309 17:53:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.690 17:53:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.690 17:53:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:27:51.690 17:53:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.690 17:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.690 17:53:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.690 17:53:12 -- common/autotest_common.sh@10 -- # set +x 00:27:51.690 17:53:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:51.690 17:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.690 17:53:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:51.690 17:53:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:52.644 17:53:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.644 17:53:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.644 17:53:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.644 17:53:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.644 17:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.644 17:53:13 -- common/autotest_common.sh@10 -- # set +x 00:27:52.644 17:53:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.644 17:53:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.644 17:53:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:52.644 17:53:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:53.583 17:53:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:53.583 17:53:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.583 17:53:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:53.583 17:53:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.583 17:53:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:53.583 17:53:14 -- common/autotest_common.sh@10 -- # set +x 00:27:53.583 17:53:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:53.583 17:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.583 [2024-07-24 17:53:15.019495] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:53.583 [2024-07-24 17:53:15.019540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.583 [2024-07-24 17:53:15.019560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.583 [2024-07-24 17:53:15.019572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.583 [2024-07-24 17:53:15.019582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.583 [2024-07-24 17:53:15.019592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.583 [2024-07-24 17:53:15.019602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.583 [2024-07-24 17:53:15.019612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.583 [2024-07-24 17:53:15.019621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.583 [2024-07-24 17:53:15.019631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.583 [2024-07-24 17:53:15.019641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.583 [2024-07-24 17:53:15.019651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7c50 is same with the state(5) to be set 00:27:53.583 [2024-07-24 17:53:15.029516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d7c50 (9): Bad file descriptor 00:27:53.583 [2024-07-24 17:53:15.039555] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:53.583 17:53:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:53.583 17:53:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:54.522 17:53:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:54.522 17:53:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:54.522 17:53:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:54.522 17:53:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:54.522 17:53:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.522 17:53:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:54.522 17:53:16 -- common/autotest_common.sh@10 -- # set +x 00:27:54.522 [2024-07-24 17:53:16.098060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:55.903 [2024-07-24 17:53:17.122120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:55.903 [2024-07-24 17:53:17.122162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d7c50 with addr=10.0.0.2, port=4420 00:27:55.903 [2024-07-24 17:53:17.122182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7c50 is same with the state(5) to be set 00:27:55.903 [2024-07-24 17:53:17.122211] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:55.903 [2024-07-24 17:53:17.122227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:55.903 [2024-07-24 17:53:17.122241] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:55.903 [2024-07-24 17:53:17.122257] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:55.903 [2024-07-24 17:53:17.122654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d7c50 (9): Bad file descriptor 00:27:55.903 [2024-07-24 17:53:17.122688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
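The connect() errno 110 (connection timed out) errors and the failed controller reset logged here are the intended outcome of the fault this test injects: it pulls the target-side interface out from under the established discovery and I/O connections. The injection step, taken from the trace earlier in this test, amounts to:

    # Drop the target address and take the interface down inside the target namespace,
    # so existing NVMe/TCP connections to 10.0.0.2 (ports 8009/4420) become unreachable.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down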
00:27:55.903 [2024-07-24 17:53:17.122723] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:55.903 [2024-07-24 17:53:17.122756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.903 [2024-07-24 17:53:17.122779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.903 [2024-07-24 17:53:17.122797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.903 [2024-07-24 17:53:17.122813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.903 [2024-07-24 17:53:17.122828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.903 [2024-07-24 17:53:17.122844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.903 [2024-07-24 17:53:17.122859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.903 [2024-07-24 17:53:17.122874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.903 [2024-07-24 17:53:17.122890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.903 [2024-07-24 17:53:17.122905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.903 [2024-07-24 17:53:17.122919] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
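The discovery entry and its controller are abandoned this quickly because the session was created with aggressive failure timeouts. The rpc_cmd call earlier in the trace, issued to the host-side nvmf_tgt listening on /tmp/host.sock, corresponds roughly to the following sketch using scripts/rpc.py (rpc_cmd is the test wrapper around it; values are exactly as logged):

    # Start discovery against the target's discovery service (port 8009) and auto-attach namespaces.
    # The short loss/reconnect/fast-io-fail timeouts make the host give up on the now-unreachable
    # controller within a couple of seconds instead of retrying indefinitely.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach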
00:27:55.903 [2024-07-24 17:53:17.123216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d7140 (9): Bad file descriptor 00:27:55.903 [2024-07-24 17:53:17.124229] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:55.903 [2024-07-24 17:53:17.124246] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:55.903 17:53:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.903 17:53:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:55.903 17:53:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:56.843 17:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:56.843 17:53:18 -- common/autotest_common.sh@10 -- # set +x 00:27:56.843 17:53:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:56.843 17:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.843 17:53:18 -- common/autotest_common.sh@10 -- # set +x 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:56.843 17:53:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:56.843 17:53:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:57.781 [2024-07-24 17:53:19.180930] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:57.782 [2024-07-24 17:53:19.180948] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:57.782 [2024-07-24 17:53:19.180966] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:57.782 [2024-07-24 17:53:19.311374] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:57.782 17:53:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.782 17:53:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.782 17:53:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.782 17:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.782 17:53:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.782 17:53:19 -- common/autotest_common.sh@10 -- # set +x 
00:27:57.782 17:53:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.782 17:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.782 17:53:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:57.782 17:53:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:58.041 [2024-07-24 17:53:19.496468] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:58.041 [2024-07-24 17:53:19.496502] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:58.041 [2024-07-24 17:53:19.496522] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:58.041 [2024-07-24 17:53:19.496535] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:58.041 [2024-07-24 17:53:19.496543] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:58.041 [2024-07-24 17:53:19.500301] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15e66e0 was disconnected and freed. delete nvme_qpair. 00:27:58.980 17:53:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.980 17:53:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.980 17:53:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.980 17:53:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.980 17:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:58.980 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.980 17:53:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.980 17:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:58.980 17:53:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:58.980 17:53:20 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:58.980 17:53:20 -- host/discovery_remove_ifc.sh@90 -- # killprocess 762174 00:27:58.980 17:53:20 -- common/autotest_common.sh@926 -- # '[' -z 762174 ']' 00:27:58.980 17:53:20 -- common/autotest_common.sh@930 -- # kill -0 762174 00:27:58.980 17:53:20 -- common/autotest_common.sh@931 -- # uname 00:27:58.980 17:53:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:58.980 17:53:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 762174 00:27:58.980 17:53:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:58.980 17:53:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:58.980 17:53:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 762174' 00:27:58.980 killing process with pid 762174 00:27:58.980 17:53:20 -- common/autotest_common.sh@945 -- # kill 762174 00:27:58.980 17:53:20 -- common/autotest_common.sh@950 -- # wait 762174 00:27:59.239 17:53:20 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:59.239 17:53:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:59.239 17:53:20 -- nvmf/common.sh@116 -- # sync 00:27:59.239 17:53:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:59.239 17:53:20 -- nvmf/common.sh@119 -- # set +e 00:27:59.239 17:53:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:59.239 17:53:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:59.239 rmmod nvme_tcp 00:27:59.239 rmmod nvme_fabrics 00:27:59.239 rmmod nvme_keyring 00:27:59.239 17:53:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:59.239 17:53:20 -- nvmf/common.sh@123 -- # set -e 00:27:59.239 17:53:20 -- 
nvmf/common.sh@124 -- # return 0 00:27:59.239 17:53:20 -- nvmf/common.sh@477 -- # '[' -n 762138 ']' 00:27:59.239 17:53:20 -- nvmf/common.sh@478 -- # killprocess 762138 00:27:59.239 17:53:20 -- common/autotest_common.sh@926 -- # '[' -z 762138 ']' 00:27:59.239 17:53:20 -- common/autotest_common.sh@930 -- # kill -0 762138 00:27:59.239 17:53:20 -- common/autotest_common.sh@931 -- # uname 00:27:59.239 17:53:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:59.239 17:53:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 762138 00:27:59.239 17:53:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:59.239 17:53:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:59.239 17:53:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 762138' 00:27:59.239 killing process with pid 762138 00:27:59.239 17:53:20 -- common/autotest_common.sh@945 -- # kill 762138 00:27:59.239 17:53:20 -- common/autotest_common.sh@950 -- # wait 762138 00:27:59.499 17:53:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:59.499 17:53:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:59.499 17:53:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:59.499 17:53:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:59.499 17:53:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:59.499 17:53:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.499 17:53:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.499 17:53:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.039 17:53:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:02.039 00:28:02.039 real 0m22.351s 00:28:02.039 user 0m27.829s 00:28:02.039 sys 0m5.396s 00:28:02.039 17:53:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.039 17:53:23 -- common/autotest_common.sh@10 -- # set +x 00:28:02.039 ************************************ 00:28:02.039 END TEST nvmf_discovery_remove_ifc 00:28:02.039 ************************************ 00:28:02.039 17:53:23 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:28:02.039 17:53:23 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:02.039 17:53:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:02.039 17:53:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:02.039 17:53:23 -- common/autotest_common.sh@10 -- # set +x 00:28:02.039 ************************************ 00:28:02.039 START TEST nvmf_digest 00:28:02.039 ************************************ 00:28:02.039 17:53:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:02.039 * Looking for test storage... 
00:28:02.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.039 17:53:23 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.039 17:53:23 -- nvmf/common.sh@7 -- # uname -s 00:28:02.039 17:53:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.039 17:53:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.039 17:53:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.039 17:53:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.039 17:53:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.039 17:53:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.039 17:53:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.039 17:53:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.039 17:53:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.039 17:53:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.039 17:53:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:02.039 17:53:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:02.039 17:53:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.039 17:53:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.039 17:53:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.039 17:53:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.039 17:53:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.039 17:53:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.039 17:53:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.039 17:53:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.040 17:53:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.040 17:53:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.040 17:53:23 -- paths/export.sh@5 -- # export PATH 00:28:02.040 17:53:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.040 17:53:23 -- nvmf/common.sh@46 -- # : 0 00:28:02.040 17:53:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:02.040 17:53:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:02.040 17:53:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:02.040 17:53:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.040 17:53:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.040 17:53:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:02.040 17:53:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:02.040 17:53:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:02.040 17:53:23 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:02.040 17:53:23 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:02.040 17:53:23 -- host/digest.sh@16 -- # runtime=2 00:28:02.040 17:53:23 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:28:02.040 17:53:23 -- host/digest.sh@132 -- # nvmftestinit 00:28:02.040 17:53:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:02.040 17:53:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.040 17:53:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:02.040 17:53:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:02.040 17:53:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:02.040 17:53:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.040 17:53:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.040 17:53:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.040 17:53:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:02.040 17:53:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:02.040 17:53:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:02.040 17:53:23 -- common/autotest_common.sh@10 -- # set +x 00:28:07.323 17:53:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:07.323 17:53:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:07.323 17:53:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:07.323 17:53:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:07.323 17:53:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:07.323 17:53:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:07.323 17:53:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:07.323 17:53:28 -- 
nvmf/common.sh@294 -- # net_devs=() 00:28:07.323 17:53:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:07.323 17:53:28 -- nvmf/common.sh@295 -- # e810=() 00:28:07.323 17:53:28 -- nvmf/common.sh@295 -- # local -ga e810 00:28:07.323 17:53:28 -- nvmf/common.sh@296 -- # x722=() 00:28:07.323 17:53:28 -- nvmf/common.sh@296 -- # local -ga x722 00:28:07.324 17:53:28 -- nvmf/common.sh@297 -- # mlx=() 00:28:07.324 17:53:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:07.324 17:53:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.324 17:53:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:07.324 17:53:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:07.324 17:53:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:07.324 17:53:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:07.324 17:53:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:07.324 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:07.324 17:53:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:07.324 17:53:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:07.324 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:07.324 17:53:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:07.324 17:53:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:07.324 17:53:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.324 17:53:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:07.324 17:53:28 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.324 17:53:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:07.324 Found net devices under 0000:86:00.0: cvl_0_0 00:28:07.324 17:53:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.324 17:53:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:07.324 17:53:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.324 17:53:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:07.324 17:53:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.324 17:53:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:07.324 Found net devices under 0000:86:00.1: cvl_0_1 00:28:07.324 17:53:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.324 17:53:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:07.324 17:53:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:07.324 17:53:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:07.324 17:53:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.324 17:53:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.324 17:53:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.324 17:53:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:07.324 17:53:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.324 17:53:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.324 17:53:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:07.324 17:53:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.324 17:53:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.324 17:53:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:07.324 17:53:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:07.324 17:53:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.324 17:53:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.324 17:53:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.324 17:53:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.324 17:53:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:07.324 17:53:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.324 17:53:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.324 17:53:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.324 17:53:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:07.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:28:07.324 00:28:07.324 --- 10.0.0.2 ping statistics --- 00:28:07.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.324 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:07.324 17:53:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:07.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:28:07.324 00:28:07.324 --- 10.0.0.1 ping statistics --- 00:28:07.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.324 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:28:07.324 17:53:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.324 17:53:28 -- nvmf/common.sh@410 -- # return 0 00:28:07.324 17:53:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:07.324 17:53:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.324 17:53:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:07.324 17:53:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.324 17:53:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:07.324 17:53:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:07.324 17:53:28 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:07.324 17:53:28 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:28:07.324 17:53:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:07.324 17:53:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:07.324 17:53:28 -- common/autotest_common.sh@10 -- # set +x 00:28:07.324 ************************************ 00:28:07.324 START TEST nvmf_digest_clean 00:28:07.324 ************************************ 00:28:07.324 17:53:28 -- common/autotest_common.sh@1104 -- # run_digest 00:28:07.324 17:53:28 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:28:07.324 17:53:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:07.324 17:53:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:07.324 17:53:28 -- common/autotest_common.sh@10 -- # set +x 00:28:07.324 17:53:28 -- nvmf/common.sh@469 -- # nvmfpid=767889 00:28:07.324 17:53:28 -- nvmf/common.sh@470 -- # waitforlisten 767889 00:28:07.324 17:53:28 -- common/autotest_common.sh@819 -- # '[' -z 767889 ']' 00:28:07.324 17:53:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.324 17:53:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:07.324 17:53:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.324 17:53:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:07.324 17:53:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:07.324 17:53:28 -- common/autotest_common.sh@10 -- # set +x 00:28:07.324 [2024-07-24 17:53:28.618239] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:07.324 [2024-07-24 17:53:28.618281] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.324 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.324 [2024-07-24 17:53:28.675517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.324 [2024-07-24 17:53:28.753357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:07.324 [2024-07-24 17:53:28.753474] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.324 [2024-07-24 17:53:28.753486] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.324 [2024-07-24 17:53:28.753494] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:07.324 [2024-07-24 17:53:28.753514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.894 17:53:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:07.894 17:53:29 -- common/autotest_common.sh@852 -- # return 0 00:28:07.894 17:53:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:07.894 17:53:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:07.894 17:53:29 -- common/autotest_common.sh@10 -- # set +x 00:28:07.894 17:53:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.894 17:53:29 -- host/digest.sh@120 -- # common_target_config 00:28:07.894 17:53:29 -- host/digest.sh@43 -- # rpc_cmd 00:28:07.894 17:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.894 17:53:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.154 null0 00:28:08.154 [2024-07-24 17:53:29.529715] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.154 [2024-07-24 17:53:29.553889] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.154 17:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.154 17:53:29 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:28:08.154 17:53:29 -- host/digest.sh@77 -- # local rw bs qd 00:28:08.154 17:53:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:08.154 17:53:29 -- host/digest.sh@80 -- # rw=randread 00:28:08.154 17:53:29 -- host/digest.sh@80 -- # bs=4096 00:28:08.154 17:53:29 -- host/digest.sh@80 -- # qd=128 00:28:08.154 17:53:29 -- host/digest.sh@82 -- # bperfpid=768138 00:28:08.154 17:53:29 -- host/digest.sh@83 -- # waitforlisten 768138 /var/tmp/bperf.sock 00:28:08.154 17:53:29 -- common/autotest_common.sh@819 -- # '[' -z 768138 ']' 00:28:08.154 17:53:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:08.154 17:53:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:08.154 17:53:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:08.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:08.154 17:53:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:08.154 17:53:29 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:08.154 17:53:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.154 [2024-07-24 17:53:29.597421] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:08.154 [2024-07-24 17:53:29.597462] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768138 ] 00:28:08.154 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.154 [2024-07-24 17:53:29.650372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.154 [2024-07-24 17:53:29.733111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.094 17:53:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:09.094 17:53:30 -- common/autotest_common.sh@852 -- # return 0 00:28:09.094 17:53:30 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:09.094 17:53:30 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:09.094 17:53:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:09.094 17:53:30 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.094 17:53:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.353 nvme0n1 00:28:09.353 17:53:30 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:09.353 17:53:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.612 Running I/O for 2 seconds... 
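The randread pass that starts here (4096-byte I/O, queue depth 128) follows the sequence below: bdevperf is started against its own RPC socket with initialization deferred, the subsystem behind the target's 10.0.0.2:4420 listener is attached with data digest enabled, and the timed run is kicked off. This is a condensed sketch of what the bperf_rpc/bperf_py wrappers in host/digest.sh expand to in the trace; repository paths are shortened and the socket wait loop stands in for the script's waitforlisten helper.

  # Start bdevperf paused: -z keeps it alive, --wait-for-rpc defers framework init.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  until [[ -S /var/tmp/bperf.sock ]]; do sleep 0.1; done

  # Finish init, then attach the remote subsystem with data digest (--ddgst)
  # so every payload carries a CRC32C that both ends compute and verify.
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # "Running I/O for 2 seconds..." in the log corresponds to this call.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests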
00:28:11.520 00:28:11.520 Latency(us) 00:28:11.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.520 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:11.520 nvme0n1 : 2.00 26241.85 102.51 0.00 0.00 4873.00 2051.56 26670.30 00:28:11.520 =================================================================================================================== 00:28:11.520 Total : 26241.85 102.51 0.00 0.00 4873.00 2051.56 26670.30 00:28:11.520 0 00:28:11.520 17:53:32 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:11.520 17:53:32 -- host/digest.sh@92 -- # get_accel_stats 00:28:11.520 17:53:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:11.520 17:53:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:11.520 | select(.opcode=="crc32c") 00:28:11.520 | "\(.module_name) \(.executed)"' 00:28:11.520 17:53:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:11.780 17:53:33 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:11.780 17:53:33 -- host/digest.sh@93 -- # exp_module=software 00:28:11.780 17:53:33 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:11.780 17:53:33 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:11.780 17:53:33 -- host/digest.sh@97 -- # killprocess 768138 00:28:11.780 17:53:33 -- common/autotest_common.sh@926 -- # '[' -z 768138 ']' 00:28:11.780 17:53:33 -- common/autotest_common.sh@930 -- # kill -0 768138 00:28:11.780 17:53:33 -- common/autotest_common.sh@931 -- # uname 00:28:11.780 17:53:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:11.780 17:53:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 768138 00:28:11.780 17:53:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:11.780 17:53:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:11.780 17:53:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 768138' 00:28:11.780 killing process with pid 768138 00:28:11.780 17:53:33 -- common/autotest_common.sh@945 -- # kill 768138 00:28:11.780 Received shutdown signal, test time was about 2.000000 seconds 00:28:11.780 00:28:11.780 Latency(us) 00:28:11.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.780 =================================================================================================================== 00:28:11.780 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:11.780 17:53:33 -- common/autotest_common.sh@950 -- # wait 768138 00:28:12.040 17:53:33 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:28:12.040 17:53:33 -- host/digest.sh@77 -- # local rw bs qd 00:28:12.040 17:53:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:12.040 17:53:33 -- host/digest.sh@80 -- # rw=randread 00:28:12.040 17:53:33 -- host/digest.sh@80 -- # bs=131072 00:28:12.040 17:53:33 -- host/digest.sh@80 -- # qd=16 00:28:12.040 17:53:33 -- host/digest.sh@82 -- # bperfpid=768675 00:28:12.040 17:53:33 -- host/digest.sh@83 -- # waitforlisten 768675 /var/tmp/bperf.sock 00:28:12.040 17:53:33 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:12.040 17:53:33 -- common/autotest_common.sh@819 -- # '[' -z 768675 ']' 00:28:12.040 17:53:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:28:12.040 17:53:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:12.040 17:53:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:12.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:12.040 17:53:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:12.040 17:53:33 -- common/autotest_common.sh@10 -- # set +x 00:28:12.040 [2024-07-24 17:53:33.456431] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:12.040 [2024-07-24 17:53:33.456481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768675 ] 00:28:12.040 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:12.040 Zero copy mechanism will not be used. 00:28:12.040 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.040 [2024-07-24 17:53:33.512231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.040 [2024-07-24 17:53:33.586480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.980 17:53:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:12.980 17:53:34 -- common/autotest_common.sh@852 -- # return 0 00:28:12.980 17:53:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:12.980 17:53:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:12.980 17:53:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:12.980 17:53:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.980 17:53:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.548 nvme0n1 00:28:13.548 17:53:34 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:13.548 17:53:34 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:13.548 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.548 Zero copy mechanism will not be used. 00:28:13.548 Running I/O for 2 seconds... 
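This second pass only changes the workload shape: 131072-byte reads at queue depth 16, which is also why bdevperf reports that the 65536-byte zero copy threshold is exceeded and zero copy is disabled. For orientation, the clean digest test in this trace consists of four such calls to the run_bperf helper from host/digest.sh, each repeating the single-run sketch above:

  run_bperf randread  4096   128
  run_bperf randread  131072 16    # zero copy off: 131072 > 65536 threshold
  run_bperf randwrite 4096   128
  run_bperf randwrite 131072 16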
00:28:15.456 00:28:15.456 Latency(us) 00:28:15.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.456 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:15.456 nvme0n1 : 2.00 2222.52 277.82 0.00 0.00 7196.18 5727.28 24846.69 00:28:15.456 =================================================================================================================== 00:28:15.456 Total : 2222.52 277.82 0.00 0.00 7196.18 5727.28 24846.69 00:28:15.456 0 00:28:15.457 17:53:36 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:15.457 17:53:36 -- host/digest.sh@92 -- # get_accel_stats 00:28:15.457 17:53:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:15.457 17:53:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:15.457 | select(.opcode=="crc32c") 00:28:15.457 | "\(.module_name) \(.executed)"' 00:28:15.457 17:53:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:15.716 17:53:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:15.716 17:53:37 -- host/digest.sh@93 -- # exp_module=software 00:28:15.716 17:53:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:15.716 17:53:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:15.716 17:53:37 -- host/digest.sh@97 -- # killprocess 768675 00:28:15.716 17:53:37 -- common/autotest_common.sh@926 -- # '[' -z 768675 ']' 00:28:15.716 17:53:37 -- common/autotest_common.sh@930 -- # kill -0 768675 00:28:15.716 17:53:37 -- common/autotest_common.sh@931 -- # uname 00:28:15.716 17:53:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:15.716 17:53:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 768675 00:28:15.716 17:53:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:15.716 17:53:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:15.716 17:53:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 768675' 00:28:15.716 killing process with pid 768675 00:28:15.716 17:53:37 -- common/autotest_common.sh@945 -- # kill 768675 00:28:15.716 Received shutdown signal, test time was about 2.000000 seconds 00:28:15.716 00:28:15.716 Latency(us) 00:28:15.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.716 =================================================================================================================== 00:28:15.716 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.716 17:53:37 -- common/autotest_common.sh@950 -- # wait 768675 00:28:15.976 17:53:37 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:28:15.976 17:53:37 -- host/digest.sh@77 -- # local rw bs qd 00:28:15.976 17:53:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:15.976 17:53:37 -- host/digest.sh@80 -- # rw=randwrite 00:28:15.976 17:53:37 -- host/digest.sh@80 -- # bs=4096 00:28:15.976 17:53:37 -- host/digest.sh@80 -- # qd=128 00:28:15.976 17:53:37 -- host/digest.sh@82 -- # bperfpid=769332 00:28:15.976 17:53:37 -- host/digest.sh@83 -- # waitforlisten 769332 /var/tmp/bperf.sock 00:28:15.976 17:53:37 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:15.976 17:53:37 -- common/autotest_common.sh@819 -- # '[' -z 769332 ']' 00:28:15.976 17:53:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:28:15.976 17:53:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:15.976 17:53:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:15.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.976 17:53:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:15.976 17:53:37 -- common/autotest_common.sh@10 -- # set +x 00:28:15.976 [2024-07-24 17:53:37.431437] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:15.976 [2024-07-24 17:53:37.431486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769332 ] 00:28:15.976 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.976 [2024-07-24 17:53:37.484167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.976 [2024-07-24 17:53:37.555232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.916 17:53:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:16.916 17:53:38 -- common/autotest_common.sh@852 -- # return 0 00:28:16.916 17:53:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:16.916 17:53:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:16.916 17:53:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:16.916 17:53:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.916 17:53:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.176 nvme0n1 00:28:17.176 17:53:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:17.176 17:53:38 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.439 Running I/O for 2 seconds... 
00:28:19.346 00:28:19.346 Latency(us) 00:28:19.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.346 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:19.346 nvme0n1 : 2.00 26861.79 104.93 0.00 0.00 4756.70 2493.22 23820.91 00:28:19.346 =================================================================================================================== 00:28:19.346 Total : 26861.79 104.93 0.00 0.00 4756.70 2493.22 23820.91 00:28:19.346 0 00:28:19.346 17:53:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:19.346 17:53:40 -- host/digest.sh@92 -- # get_accel_stats 00:28:19.346 17:53:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:19.346 17:53:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:19.346 | select(.opcode=="crc32c") 00:28:19.346 | "\(.module_name) \(.executed)"' 00:28:19.346 17:53:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:19.606 17:53:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:19.606 17:53:40 -- host/digest.sh@93 -- # exp_module=software 00:28:19.606 17:53:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:19.606 17:53:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:19.606 17:53:40 -- host/digest.sh@97 -- # killprocess 769332 00:28:19.606 17:53:40 -- common/autotest_common.sh@926 -- # '[' -z 769332 ']' 00:28:19.606 17:53:40 -- common/autotest_common.sh@930 -- # kill -0 769332 00:28:19.606 17:53:40 -- common/autotest_common.sh@931 -- # uname 00:28:19.606 17:53:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:19.606 17:53:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 769332 00:28:19.606 17:53:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:19.606 17:53:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:19.606 17:53:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 769332' 00:28:19.606 killing process with pid 769332 00:28:19.606 17:53:41 -- common/autotest_common.sh@945 -- # kill 769332 00:28:19.606 Received shutdown signal, test time was about 2.000000 seconds 00:28:19.606 00:28:19.606 Latency(us) 00:28:19.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.606 =================================================================================================================== 00:28:19.606 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:19.606 17:53:41 -- common/autotest_common.sh@950 -- # wait 769332 00:28:19.866 17:53:41 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:28:19.866 17:53:41 -- host/digest.sh@77 -- # local rw bs qd 00:28:19.866 17:53:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:19.866 17:53:41 -- host/digest.sh@80 -- # rw=randwrite 00:28:19.866 17:53:41 -- host/digest.sh@80 -- # bs=131072 00:28:19.866 17:53:41 -- host/digest.sh@80 -- # qd=16 00:28:19.866 17:53:41 -- host/digest.sh@82 -- # bperfpid=770039 00:28:19.866 17:53:41 -- host/digest.sh@83 -- # waitforlisten 770039 /var/tmp/bperf.sock 00:28:19.866 17:53:41 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:19.866 17:53:41 -- common/autotest_common.sh@819 -- # '[' -z 770039 ']' 00:28:19.866 17:53:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:28:19.866 17:53:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:19.866 17:53:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:19.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:19.866 17:53:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:19.866 17:53:41 -- common/autotest_common.sh@10 -- # set +x 00:28:19.866 [2024-07-24 17:53:41.282498] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:19.866 [2024-07-24 17:53:41.282545] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770039 ] 00:28:19.866 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:19.866 Zero copy mechanism will not be used. 00:28:19.866 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.866 [2024-07-24 17:53:41.333541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.866 [2024-07-24 17:53:41.404453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.807 17:53:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:20.807 17:53:42 -- common/autotest_common.sh@852 -- # return 0 00:28:20.807 17:53:42 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:20.807 17:53:42 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:20.807 17:53:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:20.807 17:53:42 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.807 17:53:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.128 nvme0n1 00:28:21.128 17:53:42 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:21.128 17:53:42 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.128 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.128 Zero copy mechanism will not be used. 00:28:21.128 Running I/O for 2 seconds... 
00:28:23.665 00:28:23.665 Latency(us) 00:28:23.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.665 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:23.665 nvme0n1 : 2.01 1503.18 187.90 0.00 0.00 10617.51 7522.39 38979.67 00:28:23.665 =================================================================================================================== 00:28:23.665 Total : 1503.18 187.90 0.00 0.00 10617.51 7522.39 38979.67 00:28:23.665 0 00:28:23.665 17:53:44 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:23.665 17:53:44 -- host/digest.sh@92 -- # get_accel_stats 00:28:23.665 17:53:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:23.665 17:53:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:23.665 17:53:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:23.665 | select(.opcode=="crc32c") 00:28:23.665 | "\(.module_name) \(.executed)"' 00:28:23.665 17:53:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:23.665 17:53:44 -- host/digest.sh@93 -- # exp_module=software 00:28:23.665 17:53:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:23.665 17:53:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:23.665 17:53:44 -- host/digest.sh@97 -- # killprocess 770039 00:28:23.665 17:53:44 -- common/autotest_common.sh@926 -- # '[' -z 770039 ']' 00:28:23.665 17:53:44 -- common/autotest_common.sh@930 -- # kill -0 770039 00:28:23.665 17:53:44 -- common/autotest_common.sh@931 -- # uname 00:28:23.665 17:53:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:23.665 17:53:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 770039 00:28:23.665 17:53:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:23.665 17:53:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:23.665 17:53:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 770039' 00:28:23.665 killing process with pid 770039 00:28:23.665 17:53:44 -- common/autotest_common.sh@945 -- # kill 770039 00:28:23.665 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.665 00:28:23.665 Latency(us) 00:28:23.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.665 =================================================================================================================== 00:28:23.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.665 17:53:44 -- common/autotest_common.sh@950 -- # wait 770039 00:28:23.665 17:53:45 -- host/digest.sh@126 -- # killprocess 767889 00:28:23.665 17:53:45 -- common/autotest_common.sh@926 -- # '[' -z 767889 ']' 00:28:23.665 17:53:45 -- common/autotest_common.sh@930 -- # kill -0 767889 00:28:23.665 17:53:45 -- common/autotest_common.sh@931 -- # uname 00:28:23.665 17:53:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:23.665 17:53:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 767889 00:28:23.665 17:53:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:23.665 17:53:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:23.665 17:53:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 767889' 00:28:23.665 killing process with pid 767889 00:28:23.665 17:53:45 -- common/autotest_common.sh@945 -- # kill 767889 00:28:23.665 17:53:45 -- common/autotest_common.sh@950 -- # wait 767889 00:28:23.925 
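Each pass is judged by the accel statistics read back from the bdevperf app rather than by the throughput numbers alone: the test requires that crc32c operations were actually executed and that they ran in the expected module (software in this job, since no hardware accel module is configured). A sketch of that check, using the same jq filter that appears in the trace:

  # Fetch per-opcode accel stats from bdevperf and keep only the crc32c
  # counters; the output line is "<module_name> <executed>".
  read -r acc_module acc_executed < <(
      ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'
  )

  # Pass criteria for the clean test, mirroring host/digest.sh.
  exp_module=software
  (( acc_executed > 0 )) || exit 1
  [[ $acc_module == "$exp_module" ]] || exit 1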
00:28:23.925 real 0m16.796s 00:28:23.925 user 0m33.025s 00:28:23.925 sys 0m3.476s 00:28:23.925 17:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:23.925 17:53:45 -- common/autotest_common.sh@10 -- # set +x 00:28:23.925 ************************************ 00:28:23.925 END TEST nvmf_digest_clean 00:28:23.925 ************************************ 00:28:23.925 17:53:45 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:28:23.925 17:53:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:23.925 17:53:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:23.925 17:53:45 -- common/autotest_common.sh@10 -- # set +x 00:28:23.925 ************************************ 00:28:23.925 START TEST nvmf_digest_error 00:28:23.925 ************************************ 00:28:23.925 17:53:45 -- common/autotest_common.sh@1104 -- # run_digest_error 00:28:23.925 17:53:45 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:28:23.925 17:53:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:23.925 17:53:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:23.925 17:53:45 -- common/autotest_common.sh@10 -- # set +x 00:28:23.925 17:53:45 -- nvmf/common.sh@469 -- # nvmfpid=770773 00:28:23.925 17:53:45 -- nvmf/common.sh@470 -- # waitforlisten 770773 00:28:23.925 17:53:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:23.925 17:53:45 -- common/autotest_common.sh@819 -- # '[' -z 770773 ']' 00:28:23.925 17:53:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.925 17:53:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:23.925 17:53:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.925 17:53:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:23.925 17:53:45 -- common/autotest_common.sh@10 -- # set +x 00:28:23.925 [2024-07-24 17:53:45.455825] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:23.925 [2024-07-24 17:53:45.455870] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.925 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.925 [2024-07-24 17:53:45.512268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.185 [2024-07-24 17:53:45.590078] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:24.185 [2024-07-24 17:53:45.590192] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.185 [2024-07-24 17:53:45.590204] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.185 [2024-07-24 17:53:45.590212] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
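nvmf_digest_error begins here by restarting the target inside the namespace (the ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc line above) and then configuring the same null bdev and 10.0.0.2:4420 TCP listener as before. The target-side RPCs themselves are not traced verbatim in this log, so the following is only a plausible equivalent built from standard SPDK RPC calls against the target's default /var/tmp/spdk.sock socket; the null bdev size and block size are assumptions.

  # Error flavour only: point crc32c at the injectable "error" accel module.
  # This has to happen while the app is still paused by --wait-for-rpc, which
  # matches the accel_assign_opc line visible just below in the trace.
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error

  # Release the framework, then build the NVMe-oF target configuration.
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o      # NVMF_TRANSPORT_OPTS from the log
  ./scripts/rpc.py bdev_null_create null0 1000 512      # size/block size assumed
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420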
00:28:24.186 [2024-07-24 17:53:45.590232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.755 17:53:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:24.755 17:53:46 -- common/autotest_common.sh@852 -- # return 0 00:28:24.755 17:53:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:24.755 17:53:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:24.755 17:53:46 -- common/autotest_common.sh@10 -- # set +x 00:28:24.755 17:53:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.755 17:53:46 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:24.755 17:53:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.755 17:53:46 -- common/autotest_common.sh@10 -- # set +x 00:28:24.755 [2024-07-24 17:53:46.284300] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:24.755 17:53:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.755 17:53:46 -- host/digest.sh@104 -- # common_target_config 00:28:24.755 17:53:46 -- host/digest.sh@43 -- # rpc_cmd 00:28:24.755 17:53:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.755 17:53:46 -- common/autotest_common.sh@10 -- # set +x 00:28:25.016 null0 00:28:25.016 [2024-07-24 17:53:46.374302] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.016 [2024-07-24 17:53:46.398474] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.016 17:53:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.016 17:53:46 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:28:25.016 17:53:46 -- host/digest.sh@54 -- # local rw bs qd 00:28:25.016 17:53:46 -- host/digest.sh@56 -- # rw=randread 00:28:25.016 17:53:46 -- host/digest.sh@56 -- # bs=4096 00:28:25.016 17:53:46 -- host/digest.sh@56 -- # qd=128 00:28:25.016 17:53:46 -- host/digest.sh@58 -- # bperfpid=770918 00:28:25.016 17:53:46 -- host/digest.sh@60 -- # waitforlisten 770918 /var/tmp/bperf.sock 00:28:25.016 17:53:46 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:25.016 17:53:46 -- common/autotest_common.sh@819 -- # '[' -z 770918 ']' 00:28:25.016 17:53:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.016 17:53:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:25.016 17:53:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:25.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:25.016 17:53:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:25.016 17:53:46 -- common/autotest_common.sh@10 -- # set +x 00:28:25.016 [2024-07-24 17:53:46.444459] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:25.016 [2024-07-24 17:53:46.444497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770918 ] 00:28:25.016 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.016 [2024-07-24 17:53:46.497187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.016 [2024-07-24 17:53:46.574910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.955 17:53:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:25.955 17:53:47 -- common/autotest_common.sh@852 -- # return 0 00:28:25.955 17:53:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.955 17:53:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.955 17:53:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:25.955 17:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.955 17:53:47 -- common/autotest_common.sh@10 -- # set +x 00:28:25.956 17:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.956 17:53:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.956 17:53:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.215 nvme0n1 00:28:26.215 17:53:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:26.215 17:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.215 17:53:47 -- common/autotest_common.sh@10 -- # set +x 00:28:26.215 17:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.215 17:53:47 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:26.215 17:53:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.215 Running I/O for 2 seconds... 
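On the initiator side the error test differs from the clean runs in three ways that explain the rest of this log: bdevperf is started without --wait-for-rpc, it is told never to give up on retries, and once the controller is attached the target is instructed to corrupt its CRC32C results. Every subsequent READ then fails data digest verification on the initiator ("data digest error on tqpair") and completes as a COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bdev layer keeps retrying thanks to the unlimited retry count. A condensed sketch of the sequence traced above, with the bperf_rpc/rpc_cmd wrappers expanded and paths shortened:

  # bdevperf runs immediately this time (no --wait-for-rpc); -z keeps it alive.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z &
  until [[ -S /var/tmp/bperf.sock ]]; do sleep 0.1; done

  # Collect NVMe error stats and retry failed I/O forever.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # Keep injection off while the controller is attached with --ddgst, then
  # turn on corruption of crc32c results on the target (-i 256 as logged).
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the timed workload; the digest errors below are the expected outcome.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests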
00:28:26.215 [2024-07-24 17:53:47.803797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.215 [2024-07-24 17:53:47.803833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.215 [2024-07-24 17:53:47.803843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.215 [2024-07-24 17:53:47.813185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.215 [2024-07-24 17:53:47.813219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.215 [2024-07-24 17:53:47.813228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.475 [2024-07-24 17:53:47.822537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.475 [2024-07-24 17:53:47.822560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.475 [2024-07-24 17:53:47.822568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.475 [2024-07-24 17:53:47.830776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.475 [2024-07-24 17:53:47.830797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.475 [2024-07-24 17:53:47.830805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.475 [2024-07-24 17:53:47.839910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.475 [2024-07-24 17:53:47.839931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.475 [2024-07-24 17:53:47.839939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.475 [2024-07-24 17:53:47.848537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.475 [2024-07-24 17:53:47.848557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.475 [2024-07-24 17:53:47.848565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.475 [2024-07-24 17:53:47.856976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.475 [2024-07-24 17:53:47.856998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.475 [2024-07-24 17:53:47.857006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.475 [2024-07-24 17:53:47.865603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.475 [2024-07-24 17:53:47.865624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.475 [2024-07-24 17:53:47.865632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.475 [2024-07-24 17:53:47.874718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.475 [2024-07-24 17:53:47.874738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.475 [2024-07-24 17:53:47.874747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.475 [2024-07-24 17:53:47.883399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.475 [2024-07-24 17:53:47.883419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.475 [2024-07-24 17:53:47.883427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.475 [2024-07-24 17:53:47.892182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.475 [2024-07-24 17:53:47.892202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.475 [2024-07-24 17:53:47.892210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.900856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.900876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.900884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.909685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.909705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.909713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.918234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.918254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.918262] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.927356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.927376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.927384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.935737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.935757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.935765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.944553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.944573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.944581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.953002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.953022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.953030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.962054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.962075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.962086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.970312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.970332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.970341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.979292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.979312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 
17:53:47.979320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.987714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.987734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.987742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:47.996748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:47.996767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:47.996776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:48.005237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:48.005257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:48.005265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:48.013705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:48.013725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:48.013732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:48.022666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:48.022685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:48.022694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:48.030993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:48.031013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:48.031022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:48.039913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:48.039933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15774 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:48.039941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:48.048911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:48.048932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:48.048939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:48.059225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:48.059245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:48.059252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.476 [2024-07-24 17:53:48.067388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.476 [2024-07-24 17:53:48.067408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.476 [2024-07-24 17:53:48.067416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.079592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.079613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.079621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.091420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.091439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.091448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.099011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.099030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.099038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.110997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.111017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:107 nsid:1 lba:14738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.111025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.119095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.119114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.119126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.131000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.131019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.131027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.141851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.141870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.141878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.152905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.152924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.152932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.166277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.166296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.166304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.174433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.174453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.174460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.183633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.183653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.737 [2024-07-24 17:53:48.183660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.737 [2024-07-24 17:53:48.192098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.737 [2024-07-24 17:53:48.192118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.192126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.200966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.200985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.200993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.209456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.209479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.209488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.218433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.218453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.218461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.227171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.227190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.227198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.235689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.235709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.235717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.244412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 
00:28:26.738 [2024-07-24 17:53:48.244431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.244439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.253472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.253492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.253500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.264556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.264576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.264583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.275075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.275094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.275102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.284541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.284560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.284568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.292828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.292847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.292855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.301337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.301356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.301364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.313006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.313025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.313034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.322370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.322389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.322397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.738 [2024-07-24 17:53:48.332904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.738 [2024-07-24 17:53:48.332924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.738 [2024-07-24 17:53:48.332932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.998 [2024-07-24 17:53:48.343554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.998 [2024-07-24 17:53:48.343574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.998 [2024-07-24 17:53:48.343582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.998 [2024-07-24 17:53:48.351986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.998 [2024-07-24 17:53:48.352006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.998 [2024-07-24 17:53:48.352014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.998 [2024-07-24 17:53:48.361647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.998 [2024-07-24 17:53:48.361666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.998 [2024-07-24 17:53:48.361674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.371122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.371142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.371153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.382202] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.382221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.382230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.391922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.391941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.391948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.400151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.400171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.400178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.413013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.413031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.413039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.424386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.424406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.424414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.432938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.432958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.432966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.443252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.443271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.443280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.451366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.451386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.451394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.464183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.464202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.464210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.475818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.475838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.475846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.484867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.484886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.484894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.495470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.495490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.495498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.505847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.505866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.505874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.514100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.514119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.514127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.523589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.523608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.523616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.533035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.533058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.533066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.544368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.544386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.544397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.553216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.553235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.553242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.563055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.563074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.563082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.573905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.573924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.573932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.585269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.585288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.585295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.999 [2024-07-24 17:53:48.594037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:26.999 [2024-07-24 17:53:48.594061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.999 [2024-07-24 17:53:48.594069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.604846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.604866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.604874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.615192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.615212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.615219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.624556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.624575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.624583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.634773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.634795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.634803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.645026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.645050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.645058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.655542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.655561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:27.261 [2024-07-24 17:53:48.655569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.665352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.665371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.665379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.672984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.673004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.673012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.685140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.685159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.685167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.696209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.696227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.696235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.705002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.705022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.705030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.712745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.712766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.712774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.723111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.723133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:6933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.723141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.732422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.732442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.732450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.747303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.747322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.747330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.756461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.756481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.756489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.261 [2024-07-24 17:53:48.765691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.261 [2024-07-24 17:53:48.765710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.261 [2024-07-24 17:53:48.765718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-24 17:53:48.774297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.262 [2024-07-24 17:53:48.774316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-24 17:53:48.774324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-24 17:53:48.784637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.262 [2024-07-24 17:53:48.784656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-24 17:53:48.784664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-24 17:53:48.792976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.262 [2024-07-24 17:53:48.792995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-24 17:53:48.793002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-24 17:53:48.802169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.262 [2024-07-24 17:53:48.802188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-24 17:53:48.802200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-24 17:53:48.811890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.262 [2024-07-24 17:53:48.811910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-24 17:53:48.811919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-24 17:53:48.821149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.262 [2024-07-24 17:53:48.821169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-24 17:53:48.821177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-24 17:53:48.830364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.262 [2024-07-24 17:53:48.830385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-24 17:53:48.830393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-24 17:53:48.839170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.262 [2024-07-24 17:53:48.839190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-24 17:53:48.839197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-24 17:53:48.847840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.262 [2024-07-24 17:53:48.847860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-24 17:53:48.847868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.262 [2024-07-24 17:53:48.856573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 
00:28:27.262 [2024-07-24 17:53:48.856594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.262 [2024-07-24 17:53:48.856602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.521 [2024-07-24 17:53:48.865829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.521 [2024-07-24 17:53:48.865849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.521 [2024-07-24 17:53:48.865858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.521 [2024-07-24 17:53:48.874392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.521 [2024-07-24 17:53:48.874412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.521 [2024-07-24 17:53:48.874420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.521 [2024-07-24 17:53:48.882945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.521 [2024-07-24 17:53:48.882967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.521 [2024-07-24 17:53:48.882975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.521 [2024-07-24 17:53:48.892047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.521 [2024-07-24 17:53:48.892066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.521 [2024-07-24 17:53:48.892073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.521 [2024-07-24 17:53:48.900678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.521 [2024-07-24 17:53:48.900699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.521 [2024-07-24 17:53:48.900707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.521 [2024-07-24 17:53:48.909168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.521 [2024-07-24 17:53:48.909188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.521 [2024-07-24 17:53:48.909196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.521 [2024-07-24 17:53:48.918116] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.521 [2024-07-24 17:53:48.918135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.521 [2024-07-24 17:53:48.918143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.521 [2024-07-24 17:53:48.926608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:48.926627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:48.926635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:48.935201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:48.935219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:48.935227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:48.944241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:48.944261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:48.944268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:48.952486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:48.952505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:48.952516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:48.961426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:48.961444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:48.961452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:48.970103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:48.970122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:48.970130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:27.522 [2024-07-24 17:53:48.978360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:48.978379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:48.978387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:48.987494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:48.987513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:48.987521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:48.995994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:48.996013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:48.996021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.004632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.004651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.004659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.013109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.013127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.013135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.022184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.022204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.022212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.030643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.030668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.030677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.039616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.039636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.039643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.048089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.048108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.048116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.057164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.057185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.057193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.065635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.065655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.065663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.073956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.073976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.073984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.083087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.083108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.083115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.091845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.091866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.091874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.100984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.101005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.101013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.522 [2024-07-24 17:53:49.109296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.522 [2024-07-24 17:53:49.109316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.522 [2024-07-24 17:53:49.109324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.782 [2024-07-24 17:53:49.118563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.782 [2024-07-24 17:53:49.118584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.782 [2024-07-24 17:53:49.118592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.782 [2024-07-24 17:53:49.127070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.782 [2024-07-24 17:53:49.127090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.782 [2024-07-24 17:53:49.127098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.782 [2024-07-24 17:53:49.136241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.782 [2024-07-24 17:53:49.136261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.782 [2024-07-24 17:53:49.136268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.144688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.144708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.144716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.153245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.153265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 
[2024-07-24 17:53:49.153273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.162248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.162267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.162275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.170495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.170515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.170523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.178893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.178913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.178923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.187980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.188000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.188008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.196680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.196700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.196708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.205448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.205468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.205476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.213841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.213861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17653 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.213869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.222730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.222750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.222758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.231347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.231367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.231375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.239700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.239719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.239727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.248620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.248640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.248647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.257215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.257239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.257247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.266330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.266351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.266359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.275176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.275196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.275204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.283859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.283879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.283887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.293370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.293389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.293397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.302147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.302167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.302175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.310973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.310992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.311001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.321077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.321097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.321106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.331039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.331066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.331075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.341296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 
00:28:27.783 [2024-07-24 17:53:49.341318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.341327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.350380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.350400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.350408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.359290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.359310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.359318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.368744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.783 [2024-07-24 17:53:49.368764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.783 [2024-07-24 17:53:49.368773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.783 [2024-07-24 17:53:49.378745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:27.784 [2024-07-24 17:53:49.378765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.784 [2024-07-24 17:53:49.378773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.386806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.386826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.386834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.395515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.395535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.395544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.404255] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.404275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.404283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.412679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.412702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.412710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.421686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.421706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.421714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.430154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.430173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.430181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.439284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.439303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.439312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.447636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.447655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.447663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.456093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.456112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.456120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:28.044 [2024-07-24 17:53:49.465266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.465286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.465294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.473578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.473598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.473606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.482713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.482732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.482740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.490900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.490919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.490927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.499481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.499500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.499507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.508697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.508716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.508723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.516891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.516910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.516917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.526123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.526143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.526150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.534587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.534606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.534613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.543532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.543552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.543560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.552087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.552106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.552114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.561028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.561052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.561063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.569443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.569462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.569470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.577879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.577898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.577906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.586739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.586758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.586766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.595543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.595562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.595569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.603951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.044 [2024-07-24 17:53:49.603970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.044 [2024-07-24 17:53:49.603978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.044 [2024-07-24 17:53:49.612902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.045 [2024-07-24 17:53:49.612921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-07-24 17:53:49.612929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.045 [2024-07-24 17:53:49.621312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.045 [2024-07-24 17:53:49.621332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-07-24 17:53:49.621340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.045 [2024-07-24 17:53:49.630268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.045 [2024-07-24 17:53:49.630287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-07-24 17:53:49.630295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.045 [2024-07-24 17:53:49.639032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.045 [2024-07-24 17:53:49.639059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.045 [2024-07-24 17:53:49.639068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.304 [2024-07-24 17:53:49.647867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.304 [2024-07-24 17:53:49.647887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.304 [2024-07-24 17:53:49.647895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.304 [2024-07-24 17:53:49.656538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.656557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.656565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.665397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.665416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.665424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.673967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.673986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.673994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.682888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.682907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.682915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.691273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.691292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.691300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.699790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.699809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:5264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.699816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.708729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.708748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.708756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.717090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.717109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.717117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.726183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.726203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.726210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.734708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.734728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.734736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.743546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.743566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.743574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.752014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.752034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.752047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.760865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.760884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.760892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.769414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.769434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.769441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 [2024-07-24 17:53:49.778413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22d99c0) 00:28:28.305 [2024-07-24 17:53:49.778431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.305 [2024-07-24 17:53:49.778439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.305 00:28:28.305 Latency(us) 00:28:28.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.305 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:28.305 nvme0n1 : 2.00 27595.31 107.79 0.00 0.00 4633.48 2364.99 21655.37 00:28:28.305 =================================================================================================================== 00:28:28.305 Total : 27595.31 107.79 0.00 0.00 4633.48 2364.99 21655.37 00:28:28.305 0 00:28:28.305 17:53:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:28.305 17:53:49 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:28.305 | .driver_specific 00:28:28.305 | .nvme_error 00:28:28.305 | .status_code 00:28:28.305 | .command_transient_transport_error' 00:28:28.305 17:53:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:28.305 17:53:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:28.565 17:53:49 -- host/digest.sh@71 -- # (( 216 > 0 )) 00:28:28.565 17:53:49 -- host/digest.sh@73 -- # killprocess 770918 00:28:28.565 17:53:49 -- common/autotest_common.sh@926 -- # '[' -z 770918 ']' 00:28:28.565 17:53:49 -- common/autotest_common.sh@930 -- # kill -0 770918 00:28:28.565 17:53:49 -- common/autotest_common.sh@931 -- # uname 00:28:28.565 17:53:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:28.565 17:53:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 770918 00:28:28.565 17:53:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:28.565 17:53:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:28.565 17:53:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 770918' 00:28:28.565 killing process with pid 770918 00:28:28.565 17:53:50 -- common/autotest_common.sh@945 -- # kill 770918 00:28:28.565 Received shutdown signal, test time was about 2.000000 seconds 00:28:28.565 00:28:28.565 Latency(us) 00:28:28.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.565 
=================================================================================================================== 00:28:28.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:28.565 17:53:50 -- common/autotest_common.sh@950 -- # wait 770918 00:28:28.826 17:53:50 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:28:28.826 17:53:50 -- host/digest.sh@54 -- # local rw bs qd 00:28:28.826 17:53:50 -- host/digest.sh@56 -- # rw=randread 00:28:28.826 17:53:50 -- host/digest.sh@56 -- # bs=131072 00:28:28.826 17:53:50 -- host/digest.sh@56 -- # qd=16 00:28:28.826 17:53:50 -- host/digest.sh@58 -- # bperfpid=771515 00:28:28.826 17:53:50 -- host/digest.sh@60 -- # waitforlisten 771515 /var/tmp/bperf.sock 00:28:28.826 17:53:50 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:28.826 17:53:50 -- common/autotest_common.sh@819 -- # '[' -z 771515 ']' 00:28:28.826 17:53:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:28.826 17:53:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:28.826 17:53:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:28.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:28.826 17:53:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:28.826 17:53:50 -- common/autotest_common.sh@10 -- # set +x 00:28:28.826 [2024-07-24 17:53:50.274078] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:28.826 [2024-07-24 17:53:50.274128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771515 ] 00:28:28.826 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:28.826 Zero copy mechanism will not be used. 
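The trace above is the pass/fail core of this digest case: host/digest.sh@71 reads the per-bdev I/O statistics from the bdevperf RPC socket, extracts the NVMe transient-transport-error counter from the driver-specific section with jq, and requires it to be non-zero (216 in this run) before killing the bdevperf process and relaunching it for the next case (randread, 131072-byte I/O, queue depth 16). A minimal sketch of that check follows, using only the paths, socket name and jq filter shown in the trace; the variable names and the explicit exit are assumptions, not the harness's own code.

  # Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions seen by bdevperf.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # Digest corruption must have produced at least one transient transport error.
  (( errcount > 0 )) || exit 1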
00:28:28.826 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.826 [2024-07-24 17:53:50.327884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.826 [2024-07-24 17:53:50.405313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.764 17:53:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:29.764 17:53:51 -- common/autotest_common.sh@852 -- # return 0 00:28:29.764 17:53:51 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:29.764 17:53:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:29.764 17:53:51 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:29.764 17:53:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.764 17:53:51 -- common/autotest_common.sh@10 -- # set +x 00:28:29.764 17:53:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.764 17:53:51 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.764 17:53:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.023 nvme0n1 00:28:30.023 17:53:51 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:30.023 17:53:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.023 17:53:51 -- common/autotest_common.sh@10 -- # set +x 00:28:30.023 17:53:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.023 17:53:51 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:30.023 17:53:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:30.023 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:30.023 Zero copy mechanism will not be used. 00:28:30.023 Running I/O for 2 seconds... 
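Before the 2-second run starts, the trace shows the RPC steps that make the data-digest errors below reproducible: bdev_nvme_set_options enables NVMe error statistics and sets the bdev retry count to -1, bdev_nvme_attach_controller connects to the target over TCP with --ddgst so reads carry a data digest, and accel_error_inject_error switches the crc32c operation from 'disable' to 'corrupt' so digest verification fails and the reads complete as COMMAND TRANSIENT TRANSPORT ERROR. The hedged sketch below assumes the two bdev_nvme calls go to the bperf socket exactly as traced; the trace issues accel_error_inject_error through rpc_cmd, whose socket is not visible here, so the default socket is an assumption.

  # Sketch of the digest-error setup traced above (assumptions noted inline).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # rpc_cmd in the trace does not name a socket; the default one is assumed here.
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
  # Drive I/O through bdevperf; failed digests show up in the error statistics read back later.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s "$sock" perform_tests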
00:28:30.023 [2024-07-24 17:53:51.607272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.023 [2024-07-24 17:53:51.607307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-07-24 17:53:51.607318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.622169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.622194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.622203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.635243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.635263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.635271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.648059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.648078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.648086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.660893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.660912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.660920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.673557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.673576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.673585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.686330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.686349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.686357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.699146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.699165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.699173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.711836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.711854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.711862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.724645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.724663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.724671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.737617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.737636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.737644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.750456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.750475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.750483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.763253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.763272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.763280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.776130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.776151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.776162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.789128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.789148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.789156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.801834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.801853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.801861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.814751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.814770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.814793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.828542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.828562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.828570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.842325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.842344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.842352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.856096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.856116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.282 [2024-07-24 17:53:51.856124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.282 [2024-07-24 17:53:51.869247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.282 [2024-07-24 17:53:51.869265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.282 [2024-07-24 17:53:51.869273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:51.882005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:51.882026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:51.882034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:51.894841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:51.894864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:51.894872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:51.907550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:51.907569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:51.907577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:51.920329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:51.920349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:51.920357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:51.933126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:51.933145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:51.933153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:51.945906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:51.945926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:51.945934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:51.958890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:51.958910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:51.958917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:51.971754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:51.971773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:51.971781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:51.984550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:51.984569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:51.984577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:51.997247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:51.997267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:51.997274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:52.009972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:52.009992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:52.010000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:52.022762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:52.022781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:52.022789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:52.035544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:52.035564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:52.035572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:52.048313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.542 [2024-07-24 17:53:52.048333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.542 [2024-07-24 17:53:52.048340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.542 [2024-07-24 17:53:52.061079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.543 [2024-07-24 17:53:52.061099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.543 [2024-07-24 17:53:52.061106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.543 [2024-07-24 17:53:52.073874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.543 [2024-07-24 17:53:52.073894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.543 [2024-07-24 17:53:52.073902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.543 [2024-07-24 17:53:52.086937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.543 [2024-07-24 17:53:52.086957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.543 [2024-07-24 17:53:52.086964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.543 [2024-07-24 17:53:52.100038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.543 [2024-07-24 17:53:52.100064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.543 [2024-07-24 17:53:52.100072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.543 [2024-07-24 17:53:52.112878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.543 [2024-07-24 17:53:52.112897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.543 [2024-07-24 17:53:52.112908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.543 [2024-07-24 17:53:52.125628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.543 [2024-07-24 17:53:52.125647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.543 [2024-07-24 17:53:52.125655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.543 [2024-07-24 17:53:52.138600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 
00:28:30.543 [2024-07-24 17:53:52.138620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.543 [2024-07-24 17:53:52.138628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.802 [2024-07-24 17:53:52.151449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.802 [2024-07-24 17:53:52.151469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.802 [2024-07-24 17:53:52.151477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.802 [2024-07-24 17:53:52.164306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.802 [2024-07-24 17:53:52.164327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.802 [2024-07-24 17:53:52.164335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.802 [2024-07-24 17:53:52.177305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.802 [2024-07-24 17:53:52.177325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.802 [2024-07-24 17:53:52.177333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.802 [2024-07-24 17:53:52.190129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.190148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.190156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.202924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.202944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.202951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.215983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.216002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.216010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.228827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.228850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.228858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.241588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.241608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.241616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.254401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.254421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.254429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.267225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.267245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.267252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.280310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.280330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.280338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.293289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.293309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.293317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.306079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.306099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.306107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.318949] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.318968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.318976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.331711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.331731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.331739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.344683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.344703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.344710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.357366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.357385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.357393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.370920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.370939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.370947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.385025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.385061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.385071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.803 [2024-07-24 17:53:52.397972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:30.803 [2024-07-24 17:53:52.397991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.803 [2024-07-24 17:53:52.397999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:31.063 [2024-07-24 17:53:52.411267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.411287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.411295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.424321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.424340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.424348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.438063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.438082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.438090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.451224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.451247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.451255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.463995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.464014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.464022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.477235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.477254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.477262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.498320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.498340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.498348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.512298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.512317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.512324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.525039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.525064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.525072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.537965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.537984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.537992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.559536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.559556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.559564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.574064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.574083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.574091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.590919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.590939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.590947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.615874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.615894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.615902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.632329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.063 [2024-07-24 17:53:52.632348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.063 [2024-07-24 17:53:52.632356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.063 [2024-07-24 17:53:52.651833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.064 [2024-07-24 17:53:52.651853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.064 [2024-07-24 17:53:52.651860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.323 [2024-07-24 17:53:52.669587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.323 [2024-07-24 17:53:52.669607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.323 [2024-07-24 17:53:52.669615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.323 [2024-07-24 17:53:52.682420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.323 [2024-07-24 17:53:52.682439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.323 [2024-07-24 17:53:52.682447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.323 [2024-07-24 17:53:52.701723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.323 [2024-07-24 17:53:52.701741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.323 [2024-07-24 17:53:52.701749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.323 [2024-07-24 17:53:52.718559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.718579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.718586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.732816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.732835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:31.324 [2024-07-24 17:53:52.732845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.748905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.748925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.748933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.762895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.762915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.762922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.778159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.778179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.778187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.791285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.791306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.791314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.804109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.804130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.804137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.817324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.817343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.817350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.830914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.830933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.830941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.853019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.853038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.853051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.867082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.867105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.867112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.880166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.880185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.880193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.894240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.894260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.894267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.907326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.907345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.907353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.324 [2024-07-24 17:53:52.920239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.324 [2024-07-24 17:53:52.920259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.324 [2024-07-24 17:53:52.920267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.583 [2024-07-24 17:53:52.933461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:52.933481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:52.933489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:52.946999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:52.947018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:52.947026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:52.960368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:52.960387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:52.960395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:52.974606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:52.974626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:52.974633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:52.987751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:52.987770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:52.987778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.000653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.000672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.000679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.013822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.013841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.013848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.027308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 
00:28:31.584 [2024-07-24 17:53:53.027327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.027335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.046629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.046650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.046658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.063800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.063820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.063827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.086289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.086308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.086316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.100033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.100057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.100065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.122269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.122289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.122300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.137649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.137668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.137676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.150494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.150513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.150520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.163359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.163379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.163387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.584 [2024-07-24 17:53:53.176263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.584 [2024-07-24 17:53:53.176284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.584 [2024-07-24 17:53:53.176291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.188981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.189001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.189009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.201641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.201659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.201667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.214508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.214528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.214536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.227399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.227418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.227425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.240198] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.240217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.240224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.253225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.253245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.253252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.266074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.266093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.266101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.278935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.278954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.278962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.291785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.291804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.291811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.304877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.304896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.304904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.317816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.317835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.317843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:31.844 [2024-07-24 17:53:53.330758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.330776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.330784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.343705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.343724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.343734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.356574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.356593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.356601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.844 [2024-07-24 17:53:53.369378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.844 [2024-07-24 17:53:53.369398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.844 [2024-07-24 17:53:53.369406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.845 [2024-07-24 17:53:53.382236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.845 [2024-07-24 17:53:53.382256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.845 [2024-07-24 17:53:53.382264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.845 [2024-07-24 17:53:53.395302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.845 [2024-07-24 17:53:53.395322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.845 [2024-07-24 17:53:53.395330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.845 [2024-07-24 17:53:53.408258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.845 [2024-07-24 17:53:53.408279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.845 [2024-07-24 17:53:53.408288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.845 [2024-07-24 17:53:53.421094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.845 [2024-07-24 17:53:53.421114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.845 [2024-07-24 17:53:53.421122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.845 [2024-07-24 17:53:53.433964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:31.845 [2024-07-24 17:53:53.433983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.845 [2024-07-24 17:53:53.433991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.103 [2024-07-24 17:53:53.447148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.103 [2024-07-24 17:53:53.447169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.103 [2024-07-24 17:53:53.447177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.103 [2024-07-24 17:53:53.460295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.104 [2024-07-24 17:53:53.460321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.104 [2024-07-24 17:53:53.460328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.104 [2024-07-24 17:53:53.473185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.104 [2024-07-24 17:53:53.473205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.104 [2024-07-24 17:53:53.473213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.104 [2024-07-24 17:53:53.486167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.104 [2024-07-24 17:53:53.486186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.104 [2024-07-24 17:53:53.486194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.104 [2024-07-24 17:53:53.499162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.104 [2024-07-24 17:53:53.499181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.104 [2024-07-24 17:53:53.499189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.104 [2024-07-24 17:53:53.512246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.104 [2024-07-24 17:53:53.512265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.104 [2024-07-24 17:53:53.512273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.104 [2024-07-24 17:53:53.525343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.104 [2024-07-24 17:53:53.525362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.104 [2024-07-24 17:53:53.525370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.104 [2024-07-24 17:53:53.538384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.104 [2024-07-24 17:53:53.538403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.104 [2024-07-24 17:53:53.538411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.104 [2024-07-24 17:53:53.551495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.104 [2024-07-24 17:53:53.551515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.104 [2024-07-24 17:53:53.551522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.104 [2024-07-24 17:53:53.564640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.104 [2024-07-24 17:53:53.564660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.104 [2024-07-24 17:53:53.564669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.104 [2024-07-24 17:53:53.577912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff4820) 00:28:32.104 [2024-07-24 17:53:53.577933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.104 [2024-07-24 17:53:53.577941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.104 00:28:32.104 Latency(us) 00:28:32.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.104 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:32.104 nvme0n1 : 2.00 2227.90 278.49 0.00 0.00 7177.35 6154.69 27240.18 00:28:32.104 
===================================================================================================================
00:28:32.104 Total : 2227.90 278.49 0.00 0.00 7177.35 6154.69 27240.18
00:28:32.104 0
00:28:32.104 17:53:53 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:32.104 17:53:53 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:32.104 17:53:53 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:32.104 | .driver_specific
00:28:32.104 | .nvme_error
00:28:32.104 | .status_code
00:28:32.104 | .command_transient_transport_error'
00:28:32.104 17:53:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:32.362 17:53:53 -- host/digest.sh@71 -- # (( 144 > 0 ))
00:28:32.362 17:53:53 -- host/digest.sh@73 -- # killprocess 771515
00:28:32.363 17:53:53 -- common/autotest_common.sh@926 -- # '[' -z 771515 ']'
00:28:32.363 17:53:53 -- common/autotest_common.sh@930 -- # kill -0 771515
00:28:32.363 17:53:53 -- common/autotest_common.sh@931 -- # uname
00:28:32.363 17:53:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:32.363 17:53:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 771515
00:28:32.363 17:53:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:28:32.363 17:53:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:28:32.363 17:53:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 771515'
00:28:32.363 killing process with pid 771515
00:28:32.363 17:53:53 -- common/autotest_common.sh@945 -- # kill 771515
00:28:32.363 Received shutdown signal, test time was about 2.000000 seconds
00:28:32.363
00:28:32.363 Latency(us)
00:28:32.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:32.363 ===================================================================================================================
00:28:32.363 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:32.363 17:53:53 -- common/autotest_common.sh@950 -- # wait 771515
00:28:32.622 17:53:54 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:28:32.622 17:53:54 -- host/digest.sh@54 -- # local rw bs qd
00:28:32.622 17:53:54 -- host/digest.sh@56 -- # rw=randwrite
00:28:32.622 17:53:54 -- host/digest.sh@56 -- # bs=4096
00:28:32.622 17:53:54 -- host/digest.sh@56 -- # qd=128
00:28:32.622 17:53:54 -- host/digest.sh@58 -- # bperfpid=772223
00:28:32.622 17:53:54 -- host/digest.sh@60 -- # waitforlisten 772223 /var/tmp/bperf.sock
00:28:32.622 17:53:54 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:32.622 17:53:54 -- common/autotest_common.sh@819 -- # '[' -z 772223 ']'
00:28:32.622 17:53:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:32.622 17:53:54 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:32.622 17:53:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:32.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:32.622 17:53:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:28:32.622 17:53:54 -- common/autotest_common.sh@10 -- # set +x
00:28:32.622 [2024-07-24 17:53:54.071040] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:28:32.622 [2024-07-24 17:53:54.071093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772223 ]
00:28:32.622 EAL: No free 2048 kB hugepages reported on node 1
00:28:32.622 [2024-07-24 17:53:54.123705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:32.622 [2024-07-24 17:53:54.200954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:33.560 17:53:54 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:28:33.560 17:53:54 -- common/autotest_common.sh@852 -- # return 0
00:28:33.560 17:53:54 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:33.560 17:53:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:33.560 17:53:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:33.560 17:53:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:33.560 17:53:55 -- common/autotest_common.sh@10 -- # set +x
00:28:33.561 17:53:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:33.561 17:53:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:33.561 17:53:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:33.820 nvme0n1
00:28:34.079 17:53:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:34.079 17:53:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:34.079 17:53:55 -- common/autotest_common.sh@10 -- # set +x
00:28:34.079 17:53:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:34.079 17:53:55 -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:34.079 17:53:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:34.079 Running I/O for 2 seconds...
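For reference, the pass/fail check traced above works as follows: get_transient_errcount asks the bperf RPC server for the bdev's I/O statistics (bdev_get_iostat -b nvme0n1) and pulls out the per-status counter command_transient_transport_error with the jq filter shown in the trace. The randread pass accumulated 144 such completions, so the assertion (( 144 > 0 )) passes and the bdevperf instance used for that pass (pid 771515) is shut down. The summary line is also self-consistent: 2227.90 IOPS at an I/O size of 131072 bytes (128 KiB) is roughly 278.5 MiB/s, matching the reported 278.49 MiB/s. A minimal stand-alone re-creation of the same check, reusing the rpc.py invocation and jq filter from the trace (the actual digest.sh helper may differ in detail), looks like this:

    # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR for one bdev,
    # as reported by the bdevperf instance listening on /var/tmp/bperf.sock.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)   # 144 in the run above
    (( errcount > 0 ))                           # the check only passes if digest errors were actually counted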
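The randwrite pass set up above follows the same pattern with 4096-byte writes at queue depth 128 against a fresh bdevperf instance (pid 772223). Before perform_tests is issued, digest.sh configures that instance over /var/tmp/bperf.sock: per-status NVMe error counters and unlimited bdev-layer retries are enabled (bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1), any earlier crc32c injection is disabled, the controller is attached over NVMe/TCP with the data digest enabled (--ddgst), and crc32c corruption is re-armed (accel_error_inject_error -o crc32c -t corrupt -i 256). With CRC32C results being deliberately corrupted, data digest verification fails on the affected PDUs and the commands complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the flood of *ERROR* and *NOTICE* lines that follows is therefore the expected outcome that the next get_transient_errcount call will count, not a test failure. Condensed into a sketch (every command is taken from the trace; $rpc is only shorthand, and the meaning of -i 256 is carried over from the trace rather than verified here):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status error counters, retry failed I/O indefinitely
    $rpc accel_error_inject_error -o crc32c -t disable                   # start from a clean injection state
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # connect with the NVMe/TCP data digest (DDGST) enabled
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt crc32c results so digest checks fail
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests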
00:28:34.079 [2024-07-24 17:53:55.537210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fda78 00:28:34.079 [2024-07-24 17:53:55.538950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.079 [2024-07-24 17:53:55.538977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.079 [2024-07-24 17:53:55.550591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd208 00:28:34.079 [2024-07-24 17:53:55.552066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.079 [2024-07-24 17:53:55.552087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:34.079 [2024-07-24 17:53:55.561435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fe2e8 00:28:34.079 [2024-07-24 17:53:55.562431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.079 [2024-07-24 17:53:55.562450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:34.079 [2024-07-24 17:53:55.570134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ff3c8 00:28:34.079 [2024-07-24 17:53:55.571416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.079 [2024-07-24 17:53:55.571436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:34.079 [2024-07-24 17:53:55.579143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f31b8 00:28:34.079 [2024-07-24 17:53:55.580715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.079 [2024-07-24 17:53:55.580734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:34.079 [2024-07-24 17:53:55.588078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f7100 00:28:34.079 [2024-07-24 17:53:55.589632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.079 [2024-07-24 17:53:55.589650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:34.079 [2024-07-24 17:53:55.598805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb048 00:28:34.080 [2024-07-24 17:53:55.600285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.080 [2024-07-24 17:53:55.600304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:34.080 [2024-07-24 17:53:55.608781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fa7d8 00:28:34.080 [2024-07-24 17:53:55.609535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.080 [2024-07-24 17:53:55.609554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:34.080 [2024-07-24 17:53:55.618724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f0bc0 00:28:34.080 [2024-07-24 17:53:55.620208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.080 [2024-07-24 17:53:55.620227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:34.080 [2024-07-24 17:53:55.629537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f8a50 00:28:34.080 [2024-07-24 17:53:55.631291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.080 [2024-07-24 17:53:55.631309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.080 [2024-07-24 17:53:55.640034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f8e88 00:28:34.080 [2024-07-24 17:53:55.641213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.080 [2024-07-24 17:53:55.641232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:34.080 [2024-07-24 17:53:55.649480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f4b08 00:28:34.080 [2024-07-24 17:53:55.650864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.080 [2024-07-24 17:53:55.650882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:34.080 [2024-07-24 17:53:55.659657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eea00 00:28:34.080 [2024-07-24 17:53:55.660846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.080 [2024-07-24 17:53:55.660867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.080 [2024-07-24 17:53:55.668968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f6890 00:28:34.080 [2024-07-24 17:53:55.670192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.080 [2024-07-24 17:53:55.670211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:28:34.340 [2024-07-24 17:53:55.681610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f2510 00:28:34.340 [2024-07-24 17:53:55.682608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.340 [2024-07-24 17:53:55.682626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:34.340 [2024-07-24 17:53:55.690226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f9f68 00:28:34.340 [2024-07-24 17:53:55.691808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.340 [2024-07-24 17:53:55.691826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:34.340 [2024-07-24 17:53:55.700499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f3a28 00:28:34.340 [2024-07-24 17:53:55.701388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.340 [2024-07-24 17:53:55.701406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:34.340 [2024-07-24 17:53:55.709526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fdeb0 00:28:34.340 [2024-07-24 17:53:55.710912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.340 [2024-07-24 17:53:55.710930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:34.340 [2024-07-24 17:53:55.717961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fda78 00:28:34.340 [2024-07-24 17:53:55.718449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.340 [2024-07-24 17:53:55.718467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:34.340 [2024-07-24 17:53:55.728302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f7da8 00:28:34.340 [2024-07-24 17:53:55.728938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.340 [2024-07-24 17:53:55.728955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:34.340 [2024-07-24 17:53:55.736463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fc128 00:28:34.341 [2024-07-24 17:53:55.737468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.737485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.745379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ee190 00:28:34.341 [2024-07-24 17:53:55.746597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.746617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.754268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f35f0 00:28:34.341 [2024-07-24 17:53:55.755620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.755638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.763208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f7100 00:28:34.341 [2024-07-24 17:53:55.764774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.764792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.773173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ef270 00:28:34.341 [2024-07-24 17:53:55.773965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.773983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.782112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e6300 00:28:34.341 [2024-07-24 17:53:55.782938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.782955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.791007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f5378 00:28:34.341 [2024-07-24 17:53:55.791801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.791818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.800195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e5658 00:28:34.341 [2024-07-24 17:53:55.801482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.801500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.809245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fa3a0 00:28:34.341 [2024-07-24 17:53:55.810462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.810479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.818282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f1868 00:28:34.341 [2024-07-24 17:53:55.819641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.819659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.826890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f1868 00:28:34.341 [2024-07-24 17:53:55.827752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.827770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.834981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e95a0 00:28:34.341 [2024-07-24 17:53:55.835517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.835534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.843930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fa3a0 00:28:34.341 [2024-07-24 17:53:55.844833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.844851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.852914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f0bc0 00:28:34.341 [2024-07-24 17:53:55.853486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.853503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.861958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fe720 00:28:34.341 [2024-07-24 17:53:55.862495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.862512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.870889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e6300 00:28:34.341 [2024-07-24 17:53:55.871514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.871533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.879819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f8a50 00:28:34.341 [2024-07-24 17:53:55.880235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.880253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.888740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fdeb0 00:28:34.341 [2024-07-24 17:53:55.889278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.889296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.897710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f6cc8 00:28:34.341 [2024-07-24 17:53:55.898208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.898225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.907273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.341 [2024-07-24 17:53:55.907729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.907746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.916599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.341 [2024-07-24 17:53:55.916869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.916887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.925921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.341 [2024-07-24 17:53:55.926187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.926205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.341 [2024-07-24 17:53:55.935408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.341 [2024-07-24 17:53:55.935682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.341 [2024-07-24 17:53:55.935699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:55.945026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:55.945312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:55.945329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:55.954365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:55.954637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:55.954655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:55.963649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:55.963922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:55.963939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:55.972948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:55.973221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:55.973239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:55.982254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:55.982532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:55.982552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:55.991532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:55.991803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:55.991820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:56.000826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:56.001105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:56.001122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:56.010124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:56.010397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:56.010414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:56.019421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:56.019694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:56.019712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:56.028662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:56.028934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:56.028951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:56.037934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:56.038204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:56.038221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:56.047253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:56.047548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:56.047566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:56.056840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:56.057117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:56.057134] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:56.066277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:56.066554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.602 [2024-07-24 17:53:56.066570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.602 [2024-07-24 17:53:56.075738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.602 [2024-07-24 17:53:56.076008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.076026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.085024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.085299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.085316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.094316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.094589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.094606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.103649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.103918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.103935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.112956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.113232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.113249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.122317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.122587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 
17:53:56.122604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.131664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.131933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.131950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.140955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.141234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.141252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.150265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.150540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.150557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.159582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.159855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.159872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.168892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.169166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.169184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.178225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.178496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.178513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.187493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.187767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:34.603 [2024-07-24 17:53:56.187784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.603 [2024-07-24 17:53:56.196896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.603 [2024-07-24 17:53:56.197174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.603 [2024-07-24 17:53:56.197191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.206406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.206679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.206696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.215693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.215966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.215984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.225033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.225328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.225348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.234469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.234742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.234760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.243807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.244091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.244108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.253091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.253354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5209 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.253371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.262350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.262629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.262648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.271912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.272189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.272207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.281221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.281503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.281521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.290590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.290867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.290885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.300199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.300481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.300498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.309798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.310086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.310104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.319390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.319676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23602 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.319694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.328992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.329287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.329304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.338563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.338838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.338856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.348159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.348440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.348458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.357653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.357934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.357951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.367053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.367329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.367347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.376436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.376710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.376729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.385736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.386007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25357 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.386024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.395038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.395323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.395341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.404371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.404644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.404661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.413669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.413938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.413955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.423192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.423466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.423483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.432485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.864 [2024-07-24 17:53:56.432758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.864 [2024-07-24 17:53:56.432775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.864 [2024-07-24 17:53:56.441787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.865 [2024-07-24 17:53:56.442068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.865 [2024-07-24 17:53:56.442086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.865 [2024-07-24 17:53:56.451089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.865 [2024-07-24 17:53:56.451365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:10050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.865 [2024-07-24 17:53:56.451382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.865 [2024-07-24 17:53:56.460573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:34.865 [2024-07-24 17:53:56.460853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.865 [2024-07-24 17:53:56.460871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.470161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.470438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.470458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.479583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.479858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.479875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.489014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.489301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.489318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.498332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.498610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.498627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.507666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.507937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.507954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.516969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.517255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:90 nsid:1 lba:15204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.517273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.526293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.526623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.526642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.535631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.535908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.535925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.544944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.545222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.545240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.554343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.554637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.554658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.563932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.564207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.564224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.573422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.573695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.125 [2024-07-24 17:53:56.573713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.125 [2024-07-24 17:53:56.582969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.125 [2024-07-24 17:53:56.583261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.583278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.592372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.592644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.592662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.601704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.601979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.601997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.611009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.611286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.611305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.620335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.620599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.620616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.629630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.629892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.629909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.638950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.639227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.639245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.648234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.648497] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.648514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.657649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.657922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.657939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.667181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.667450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.667468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.676651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.677476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.677493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.685975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.686242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.686260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.695278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.695541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.695558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.704629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 17:53:56.704888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.704906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.126 [2024-07-24 17:53:56.713871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.126 [2024-07-24 
17:53:56.714154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.126 [2024-07-24 17:53:56.714171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.723401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.386 [2024-07-24 17:53:56.723680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.723698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.732919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.386 [2024-07-24 17:53:56.733401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.733418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.742246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ea680 00:28:35.386 [2024-07-24 17:53:56.742956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.742973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.754639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f8a50 00:28:35.386 [2024-07-24 17:53:56.755617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.755635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.764286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.764662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.764680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.773749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.774529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.774546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.783160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 
00:28:35.386 [2024-07-24 17:53:56.783901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.783918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.792513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.792864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.792882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.801839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.802071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.802092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.811169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.811392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.811410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.820684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.820908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.820926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.830075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.830308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.830325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.839466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.839694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.839712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.848743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with 
pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.849147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.849165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.858040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.858269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.858286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.867363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.867589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.867605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.876617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.876970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.876988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.885978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.386 [2024-07-24 17:53:56.886210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.386 [2024-07-24 17:53:56.886227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.386 [2024-07-24 17:53:56.895337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.387 [2024-07-24 17:53:56.895917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.387 [2024-07-24 17:53:56.895934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.387 [2024-07-24 17:53:56.906334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eaef0 00:28:35.387 [2024-07-24 17:53:56.907652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.387 [2024-07-24 17:53:56.907669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:35.387 [2024-07-24 17:53:56.915976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.387 [2024-07-24 17:53:56.917172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.387 [2024-07-24 17:53:56.917189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:35.387 [2024-07-24 17:53:56.924976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ec840 00:28:35.387 [2024-07-24 17:53:56.926116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.387 [2024-07-24 17:53:56.926133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:35.387 [2024-07-24 17:53:56.933925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fc128 00:28:35.387 [2024-07-24 17:53:56.935124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.387 [2024-07-24 17:53:56.935141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.387 [2024-07-24 17:53:56.946326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eaef0 00:28:35.387 [2024-07-24 17:53:56.948372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.387 [2024-07-24 17:53:56.948389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.387 [2024-07-24 17:53:56.958926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eb328 00:28:35.387 [2024-07-24 17:53:56.959366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.387 [2024-07-24 17:53:56.959385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.387 [2024-07-24 17:53:56.968229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eb328 00:28:35.387 [2024-07-24 17:53:56.968630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.387 [2024-07-24 17:53:56.968647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.387 [2024-07-24 17:53:56.977562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eb328 00:28:35.387 [2024-07-24 17:53:56.977843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.387 [2024-07-24 17:53:56.977860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.647 [2024-07-24 17:53:56.987208] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eb328 00:28:35.647 [2024-07-24 17:53:56.988052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.647 [2024-07-24 17:53:56.988070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.647 [2024-07-24 17:53:56.996635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eb328 00:28:35.647 [2024-07-24 17:53:56.997837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.647 [2024-07-24 17:53:56.997854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.647 [2024-07-24 17:53:57.011278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f9f68 00:28:35.647 [2024-07-24 17:53:57.012786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.647 [2024-07-24 17:53:57.012803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.647 [2024-07-24 17:53:57.020754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f7da8 00:28:35.647 [2024-07-24 17:53:57.021485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.647 [2024-07-24 17:53:57.021503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.647 [2024-07-24 17:53:57.030035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f7da8 00:28:35.647 [2024-07-24 17:53:57.030285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.647 [2024-07-24 17:53:57.030303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.647 [2024-07-24 17:53:57.039276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f7da8 00:28:35.647 [2024-07-24 17:53:57.039476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.647 [2024-07-24 17:53:57.039493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.647 [2024-07-24 17:53:57.048440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f7da8 00:28:35.647 [2024-07-24 17:53:57.049344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.647 [2024-07-24 17:53:57.049362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.647 [2024-07-24 17:53:57.061245] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f4298 00:28:35.647 [2024-07-24 17:53:57.062651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.647 [2024-07-24 17:53:57.062673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.647 [2024-07-24 17:53:57.071926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ebb98 00:28:35.647 [2024-07-24 17:53:57.072302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.647 [2024-07-24 17:53:57.072321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.647 [2024-07-24 17:53:57.081412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ebb98 00:28:35.648 [2024-07-24 17:53:57.082240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.082258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.090800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ebb98 00:28:35.648 [2024-07-24 17:53:57.091182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.091199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.100151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ebb98 00:28:35.648 [2024-07-24 17:53:57.100368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.100385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.109405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ebb98 00:28:35.648 [2024-07-24 17:53:57.110952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.110968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.121641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fe2e8 00:28:35.648 [2024-07-24 17:53:57.122836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.122854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.131427] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f0788 00:28:35.648 [2024-07-24 17:53:57.132379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.132397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.142073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fef90 00:28:35.648 [2024-07-24 17:53:57.143183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.143201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.152439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f5378 00:28:35.648 [2024-07-24 17:53:57.153358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.153376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.161847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eb328 00:28:35.648 [2024-07-24 17:53:57.162607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.162625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.171187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eb328 00:28:35.648 [2024-07-24 17:53:57.171405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.171422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.180411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eb328 00:28:35.648 [2024-07-24 17:53:57.180636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.180653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.189823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eb328 00:28:35.648 [2024-07-24 17:53:57.190040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.190063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 
17:53:57.199148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190eb328 00:28:35.648 [2024-07-24 17:53:57.199981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.199998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.210745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ebfd0 00:28:35.648 [2024-07-24 17:53:57.212227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.212245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.221236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.648 [2024-07-24 17:53:57.221777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.221795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.230717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.648 [2024-07-24 17:53:57.230944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.230961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.648 [2024-07-24 17:53:57.240180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.648 [2024-07-24 17:53:57.240413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.648 [2024-07-24 17:53:57.240430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.908 [2024-07-24 17:53:57.249769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.908 [2024-07-24 17:53:57.249997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.908 [2024-07-24 17:53:57.250017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.908 [2024-07-24 17:53:57.259066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.908 [2024-07-24 17:53:57.259472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.908 [2024-07-24 17:53:57.259490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:28:35.908 [2024-07-24 17:53:57.268610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.268839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.268857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.277897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.278236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.278254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.287231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.287780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.287798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.296575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.296990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.297008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.305924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.306284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.306302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.315246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.315993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.316013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.324790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.325027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.325049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007d 
p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.334292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.334704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.334722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.343728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.344209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.344226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.353088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.353454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.353471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.362443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.362739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.362757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.371792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.372085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.372103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.381216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.381488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.381506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.390512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fb480 00:28:35.909 [2024-07-24 17:53:57.391412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.391429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.404337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e5ec8 00:28:35.909 [2024-07-24 17:53:57.405652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.405673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.413504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f6020 00:28:35.909 [2024-07-24 17:53:57.414328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.414346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.422176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190ef6a8 00:28:35.909 [2024-07-24 17:53:57.422899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.422917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.431066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e9168 00:28:35.909 [2024-07-24 17:53:57.431781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.431799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.441030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190fd640 00:28:35.909 [2024-07-24 17:53:57.443066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.443084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.452735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190f4b08 00:28:35.909 [2024-07-24 17:53:57.453716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.453733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.462124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e73e0 00:28:35.909 [2024-07-24 17:53:57.462337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.462355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.471441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e73e0 00:28:35.909 [2024-07-24 17:53:57.471648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.471667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.480727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e73e0 00:28:35.909 [2024-07-24 17:53:57.480933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.480951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.490093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e73e0 00:28:35.909 [2024-07-24 17:53:57.490453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.490471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:35.909 [2024-07-24 17:53:57.499395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e73e0 00:28:35.909 [2024-07-24 17:53:57.500105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.909 [2024-07-24 17:53:57.500123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:36.169 [2024-07-24 17:53:57.508970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea02a0) with pdu=0x2000190e73e0 00:28:36.169 [2024-07-24 17:53:57.509174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.169 [2024-07-24 17:53:57.509208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:36.169 00:28:36.169 Latency(us) 00:28:36.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.169 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:36.169 nvme0n1 : 2.01 26431.83 103.25 0.00 0.00 4831.53 2165.54 20173.69 00:28:36.169 =================================================================================================================== 00:28:36.169 Total : 26431.83 103.25 0.00 0.00 4831.53 2165.54 20173.69 00:28:36.169 0 00:28:36.169 17:53:57 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:36.169 17:53:57 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:36.169 17:53:57 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:36.169 | .driver_specific 00:28:36.169 | .nvme_error 00:28:36.169 | .status_code 00:28:36.169 | .command_transient_transport_error' 00:28:36.169 17:53:57 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:36.169 17:53:57 -- host/digest.sh@71 -- # (( 207 > 0 )) 00:28:36.169 17:53:57 -- host/digest.sh@73 -- # killprocess 772223 00:28:36.169 17:53:57 -- common/autotest_common.sh@926 -- # '[' -z 772223 ']' 00:28:36.169 17:53:57 -- common/autotest_common.sh@930 -- # kill -0 772223 00:28:36.169 17:53:57 -- common/autotest_common.sh@931 -- # uname 00:28:36.169 17:53:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:36.169 17:53:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 772223 00:28:36.429 17:53:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:36.429 17:53:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:36.429 17:53:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 772223' 00:28:36.429 killing process with pid 772223 00:28:36.429 17:53:57 -- common/autotest_common.sh@945 -- # kill 772223 00:28:36.429 Received shutdown signal, test time was about 2.000000 seconds 00:28:36.429 00:28:36.429 Latency(us) 00:28:36.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.429 =================================================================================================================== 00:28:36.429 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:36.429 17:53:57 -- common/autotest_common.sh@950 -- # wait 772223 00:28:36.429 17:53:57 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:28:36.429 17:53:57 -- host/digest.sh@54 -- # local rw bs qd 00:28:36.429 17:53:57 -- host/digest.sh@56 -- # rw=randwrite 00:28:36.429 17:53:57 -- host/digest.sh@56 -- # bs=131072 00:28:36.429 17:53:57 -- host/digest.sh@56 -- # qd=16 00:28:36.429 17:53:57 -- host/digest.sh@58 -- # bperfpid=772873 00:28:36.429 17:53:57 -- host/digest.sh@60 -- # waitforlisten 772873 /var/tmp/bperf.sock 00:28:36.429 17:53:57 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:36.429 17:53:57 -- common/autotest_common.sh@819 -- # '[' -z 772873 ']' 00:28:36.430 17:53:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:36.430 17:53:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:36.430 17:53:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:36.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:36.430 17:53:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:36.430 17:53:57 -- common/autotest_common.sh@10 -- # set +x 00:28:36.430 [2024-07-24 17:53:58.013956] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:36.430 [2024-07-24 17:53:58.014003] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772873 ] 00:28:36.430 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:36.430 Zero copy mechanism will not be used. 
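The (( 207 > 0 )) check above is how the run that just completed is judged: bdevperf keeps per-bdev NVMe error counters (enabled earlier via bdev_nvme_set_options --nvme-error-stat), and every injected data-digest failure is completed back as a COMMAND TRANSIENT TRANSPORT ERROR, so the counter must be non-zero. A minimal standalone sketch of that check, assuming the same rpc.py path, bperf.sock socket and nvme0n1 bdev name used in this run, would be:

    # Read the transient-transport-error counter that bdevperf tracks for nvme0n1
    # and fail if none of the injected digest errors was actually counted.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }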
00:28:36.689 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.689 [2024-07-24 17:53:58.066443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.689 [2024-07-24 17:53:58.143941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.259 17:53:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:37.259 17:53:58 -- common/autotest_common.sh@852 -- # return 0 00:28:37.259 17:53:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:37.259 17:53:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:37.519 17:53:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:37.519 17:53:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.519 17:53:58 -- common/autotest_common.sh@10 -- # set +x 00:28:37.519 17:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.519 17:53:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.519 17:53:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.779 nvme0n1 00:28:37.779 17:53:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:37.779 17:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.779 17:53:59 -- common/autotest_common.sh@10 -- # set +x 00:28:38.038 17:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.039 17:53:59 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:38.039 17:53:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:38.039 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:38.039 Zero copy mechanism will not be used. 00:28:38.039 Running I/O for 2 seconds... 
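The RPC sequence above is the setup for the 131072-byte, queue-depth-16 randwrite pass: NVMe error statistics and unlimited bdev retries are enabled on the bdevperf side, crc32c error injection is disabled so the controller can attach cleanly with TCP data digest (--ddgst) on, and only once the controller is up is the accel layer told to corrupt every 32nd crc32c operation before I/O is started. A condensed sketch of the same sequence, assuming bdevperf listens on /var/tmp/bperf.sock and that the accel_error_inject_error calls (rpc_cmd in the harness) go to the nvmf target application's default RPC socket, would be:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock
    # Keep NVMe error counters and retry failed I/O indefinitely on the initiator side.
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach with data digest enabled while injection is off on the target...
    "$RPC" accel_error_inject_error -o crc32c -t disable
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ...then corrupt every 32nd crc32c operation and start the workload.
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$BPERF_SOCK" perform_tests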
00:28:38.039 [2024-07-24 17:53:59.508557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.039 [2024-07-24 17:53:59.508902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.039 [2024-07-24 17:53:59.508928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.039 [2024-07-24 17:53:59.526584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.039 [2024-07-24 17:53:59.527065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.039 [2024-07-24 17:53:59.527088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.039 [2024-07-24 17:53:59.545898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.039 [2024-07-24 17:53:59.546345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.039 [2024-07-24 17:53:59.546366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.039 [2024-07-24 17:53:59.565153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.039 [2024-07-24 17:53:59.565681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.039 [2024-07-24 17:53:59.565701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.039 [2024-07-24 17:53:59.584484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.039 [2024-07-24 17:53:59.585070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.039 [2024-07-24 17:53:59.585090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.039 [2024-07-24 17:53:59.604308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.039 [2024-07-24 17:53:59.604566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.039 [2024-07-24 17:53:59.604584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.039 [2024-07-24 17:53:59.622256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.039 [2024-07-24 17:53:59.622937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.039 [2024-07-24 17:53:59.622956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.298 [2024-07-24 17:53:59.640652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.298 [2024-07-24 17:53:59.641367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.298 [2024-07-24 17:53:59.641386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.298 [2024-07-24 17:53:59.660023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.298 [2024-07-24 17:53:59.660542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.298 [2024-07-24 17:53:59.660560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.298 [2024-07-24 17:53:59.678482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.298 [2024-07-24 17:53:59.679060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.298 [2024-07-24 17:53:59.679079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.298 [2024-07-24 17:53:59.696618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.299 [2024-07-24 17:53:59.697086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.299 [2024-07-24 17:53:59.697107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.299 [2024-07-24 17:53:59.715843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.299 [2024-07-24 17:53:59.716456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.299 [2024-07-24 17:53:59.716474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.299 [2024-07-24 17:53:59.738031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.299 [2024-07-24 17:53:59.738730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.299 [2024-07-24 17:53:59.738749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.299 [2024-07-24 17:53:59.758734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.299 [2024-07-24 17:53:59.759428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.299 [2024-07-24 17:53:59.759447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.299 [2024-07-24 17:53:59.780532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.299 [2024-07-24 17:53:59.781162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.299 [2024-07-24 17:53:59.781181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.299 [2024-07-24 17:53:59.801594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.299 [2024-07-24 17:53:59.802391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.299 [2024-07-24 17:53:59.802409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.299 [2024-07-24 17:53:59.822404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.299 [2024-07-24 17:53:59.822991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.299 [2024-07-24 17:53:59.823010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.299 [2024-07-24 17:53:59.843131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.299 [2024-07-24 17:53:59.843805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.299 [2024-07-24 17:53:59.843823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.299 [2024-07-24 17:53:59.864588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.299 [2024-07-24 17:53:59.864912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.299 [2024-07-24 17:53:59.864930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.299 [2024-07-24 17:53:59.884535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.299 [2024-07-24 17:53:59.885074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.299 [2024-07-24 17:53:59.885093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:53:59.907032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:53:59.907632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:53:59.907650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:53:59.928775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:53:59.929215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:53:59.929234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:53:59.951324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:53:59.951872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:53:59.951890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:53:59.972584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:53:59.973182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:53:59.973201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:53:59.992709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:53:59.993315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:53:59.993334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:54:00.011968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:54:00.012493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:54:00.012519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:54:00.032296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:54:00.032688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:54:00.032714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:54:00.050807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:54:00.051244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 
[2024-07-24 17:54:00.051271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:54:00.069762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:54:00.070206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:54:00.070228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:54:00.090222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:54:00.090813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:54:00.090833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:54:00.109937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:54:00.110640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:54:00.110659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:54:00.128851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:54:00.129305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:54:00.129324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.559 [2024-07-24 17:54:00.146619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.559 [2024-07-24 17:54:00.147142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-07-24 17:54:00.147162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.167219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.167665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.167683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.187900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.188671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.188690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.209223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.209906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.209924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.229734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.230413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.230433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.250468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.251065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.251084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.271138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.271815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.271833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.291619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.292525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.292544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.312824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.313523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.313541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.332875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.333606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.333625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.351685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.352062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.352080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.370871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.371391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.371410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.391312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.391848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.819 [2024-07-24 17:54:00.391868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.819 [2024-07-24 17:54:00.410199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:38.819 [2024-07-24 17:54:00.410780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.820 [2024-07-24 17:54:00.410799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.429784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.430229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.430248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.449412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.450048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.450066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.468837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.469428] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.469447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.485876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.486311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.486331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.504909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.505494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.505513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.524213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.524572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.524591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.544132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.544804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.544822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.564576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.565194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.565217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.585539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.586202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.586221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.607260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.607953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.607973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.627995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.628702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.628722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.648634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.649163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.649181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.079 [2024-07-24 17:54:00.669095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.079 [2024-07-24 17:54:00.669701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.079 [2024-07-24 17:54:00.669719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.689005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.689542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.689560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.708412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.708717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.708736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.728147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.728710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.728728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.745012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 
[2024-07-24 17:54:00.745592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.745610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.763474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.764165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.764184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.782210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.782658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.782677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.799984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.800342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.800360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.819866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.820516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.820535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.840102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.840580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.840599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.859418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.859969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.859988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.879811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with 
pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.880495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.880514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.898176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.898670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.898688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.338 [2024-07-24 17:54:00.917951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.338 [2024-07-24 17:54:00.918469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.338 [2024-07-24 17:54:00.918488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:00.939322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:00.939750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:00.939770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:00.959815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:00.960488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:00.960508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:00.980519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:00.980977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:00.980995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:01.000069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:01.000827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:01.000847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:01.020229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:01.020752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:01.020770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:01.041258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:01.041868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:01.041886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:01.062767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:01.063309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:01.063327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:01.082897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:01.083698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:01.083722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:01.103118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:01.103569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:01.103589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:01.124726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:01.125163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:01.125182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:01.145093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:01.145587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:01.145605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:01.163314] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:01.163756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:01.163774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.598 [2024-07-24 17:54:01.182184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.598 [2024-07-24 17:54:01.182842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.598 [2024-07-24 17:54:01.182861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.201003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.201548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.201565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.220474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.220834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.220852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.240861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.241583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.241601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.260562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.261036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.261060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.279281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.279814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.279832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:39.856 [2024-07-24 17:54:01.298820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.299245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.299264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.320229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.320879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.320897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.342040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.342577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.342594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.363080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.363997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.364015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.384681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.385227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.385246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.405278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.406009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.856 [2024-07-24 17:54:01.406027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.856 [2024-07-24 17:54:01.428160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.856 [2024-07-24 17:54:01.428799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.857 [2024-07-24 17:54:01.428818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.857 [2024-07-24 17:54:01.450412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:39.857 [2024-07-24 17:54:01.450774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.857 [2024-07-24 17:54:01.450808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.115 [2024-07-24 17:54:01.471856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xea0440) with pdu=0x2000190fef90 00:28:40.115 [2024-07-24 17:54:01.472272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.115 [2024-07-24 17:54:01.472290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.115 00:28:40.115 Latency(us) 00:28:40.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.115 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:40.115 nvme0n1 : 2.01 1532.58 191.57 0.00 0.00 10408.08 7038.00 31685.23 00:28:40.115 =================================================================================================================== 00:28:40.115 Total : 1532.58 191.57 0.00 0.00 10408.08 7038.00 31685.23 00:28:40.115 0 00:28:40.115 17:54:01 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:40.115 17:54:01 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:40.115 17:54:01 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:40.115 | .driver_specific 00:28:40.115 | .nvme_error 00:28:40.115 | .status_code 00:28:40.115 | .command_transient_transport_error' 00:28:40.115 17:54:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:40.115 17:54:01 -- host/digest.sh@71 -- # (( 99 > 0 )) 00:28:40.115 17:54:01 -- host/digest.sh@73 -- # killprocess 772873 00:28:40.115 17:54:01 -- common/autotest_common.sh@926 -- # '[' -z 772873 ']' 00:28:40.115 17:54:01 -- common/autotest_common.sh@930 -- # kill -0 772873 00:28:40.115 17:54:01 -- common/autotest_common.sh@931 -- # uname 00:28:40.115 17:54:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:40.115 17:54:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 772873 00:28:40.375 17:54:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:40.375 17:54:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:40.375 17:54:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 772873' 00:28:40.375 killing process with pid 772873 00:28:40.375 17:54:01 -- common/autotest_common.sh@945 -- # kill 772873 00:28:40.375 Received shutdown signal, test time was about 2.000000 seconds 00:28:40.375 00:28:40.375 Latency(us) 00:28:40.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.375 =================================================================================================================== 00:28:40.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:40.375 17:54:01 -- common/autotest_common.sh@950 -- # wait 772873 00:28:40.375 17:54:01 -- host/digest.sh@115 -- # killprocess 770773 00:28:40.375 17:54:01 -- 
common/autotest_common.sh@926 -- # '[' -z 770773 ']' 00:28:40.375 17:54:01 -- common/autotest_common.sh@930 -- # kill -0 770773 00:28:40.375 17:54:01 -- common/autotest_common.sh@931 -- # uname 00:28:40.375 17:54:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:40.375 17:54:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 770773 00:28:40.375 17:54:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:40.634 17:54:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:40.634 17:54:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 770773' 00:28:40.634 killing process with pid 770773 00:28:40.634 17:54:01 -- common/autotest_common.sh@945 -- # kill 770773 00:28:40.634 17:54:01 -- common/autotest_common.sh@950 -- # wait 770773 00:28:40.634 00:28:40.634 real 0m16.775s 00:28:40.634 user 0m32.967s 00:28:40.634 sys 0m3.566s 00:28:40.634 17:54:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:40.634 17:54:02 -- common/autotest_common.sh@10 -- # set +x 00:28:40.634 ************************************ 00:28:40.634 END TEST nvmf_digest_error 00:28:40.634 ************************************ 00:28:40.634 17:54:02 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:28:40.634 17:54:02 -- host/digest.sh@139 -- # nvmftestfini 00:28:40.634 17:54:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:40.634 17:54:02 -- nvmf/common.sh@116 -- # sync 00:28:40.634 17:54:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:40.634 17:54:02 -- nvmf/common.sh@119 -- # set +e 00:28:40.634 17:54:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:40.635 17:54:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:40.635 rmmod nvme_tcp 00:28:40.894 rmmod nvme_fabrics 00:28:40.894 rmmod nvme_keyring 00:28:40.894 17:54:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:40.894 17:54:02 -- nvmf/common.sh@123 -- # set -e 00:28:40.894 17:54:02 -- nvmf/common.sh@124 -- # return 0 00:28:40.894 17:54:02 -- nvmf/common.sh@477 -- # '[' -n 770773 ']' 00:28:40.894 17:54:02 -- nvmf/common.sh@478 -- # killprocess 770773 00:28:40.894 17:54:02 -- common/autotest_common.sh@926 -- # '[' -z 770773 ']' 00:28:40.894 17:54:02 -- common/autotest_common.sh@930 -- # kill -0 770773 00:28:40.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (770773) - No such process 00:28:40.894 17:54:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 770773 is not found' 00:28:40.894 Process with pid 770773 is not found 00:28:40.894 17:54:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:40.894 17:54:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:40.894 17:54:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:40.894 17:54:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:40.894 17:54:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:40.894 17:54:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.894 17:54:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.894 17:54:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.805 17:54:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:42.805 00:28:42.805 real 0m41.278s 00:28:42.805 user 1m7.483s 00:28:42.805 sys 0m11.203s 00:28:42.805 17:54:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.805 17:54:04 -- common/autotest_common.sh@10 -- # set +x 00:28:42.805 
************************************ 00:28:42.805 END TEST nvmf_digest 00:28:42.805 ************************************ 00:28:42.805 17:54:04 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:42.805 17:54:04 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:42.805 17:54:04 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:42.805 17:54:04 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:42.805 17:54:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:42.805 17:54:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:42.805 17:54:04 -- common/autotest_common.sh@10 -- # set +x 00:28:42.805 ************************************ 00:28:42.805 START TEST nvmf_bdevperf 00:28:42.805 ************************************ 00:28:42.805 17:54:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:43.065 * Looking for test storage... 00:28:43.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:43.065 17:54:04 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.065 17:54:04 -- nvmf/common.sh@7 -- # uname -s 00:28:43.065 17:54:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.065 17:54:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.065 17:54:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.065 17:54:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.065 17:54:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.065 17:54:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.065 17:54:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.065 17:54:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.065 17:54:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.065 17:54:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.065 17:54:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:43.065 17:54:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:43.065 17:54:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.065 17:54:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.065 17:54:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.065 17:54:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.065 17:54:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.065 17:54:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.065 17:54:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.065 17:54:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.065 17:54:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.065 17:54:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.065 17:54:04 -- paths/export.sh@5 -- # export PATH 00:28:43.065 17:54:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.065 17:54:04 -- nvmf/common.sh@46 -- # : 0 00:28:43.065 17:54:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:43.065 17:54:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:43.065 17:54:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:43.065 17:54:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.065 17:54:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.065 17:54:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:43.065 17:54:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:43.065 17:54:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:43.065 17:54:04 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.065 17:54:04 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.065 17:54:04 -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:43.065 17:54:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:43.065 17:54:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.065 17:54:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:43.065 17:54:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:43.065 17:54:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:43.065 17:54:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.065 17:54:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:43.065 17:54:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.065 17:54:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:43.065 17:54:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:43.065 17:54:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:43.065 17:54:04 -- common/autotest_common.sh@10 -- # set +x 00:28:48.346 17:54:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:28:48.346 17:54:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:48.346 17:54:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:48.346 17:54:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:48.346 17:54:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:48.346 17:54:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:48.346 17:54:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:48.346 17:54:09 -- nvmf/common.sh@294 -- # net_devs=() 00:28:48.346 17:54:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:48.346 17:54:09 -- nvmf/common.sh@295 -- # e810=() 00:28:48.346 17:54:09 -- nvmf/common.sh@295 -- # local -ga e810 00:28:48.346 17:54:09 -- nvmf/common.sh@296 -- # x722=() 00:28:48.346 17:54:09 -- nvmf/common.sh@296 -- # local -ga x722 00:28:48.346 17:54:09 -- nvmf/common.sh@297 -- # mlx=() 00:28:48.346 17:54:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:48.346 17:54:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.346 17:54:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:48.346 17:54:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:48.347 17:54:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:48.347 17:54:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:48.347 17:54:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:48.347 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:48.347 17:54:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:48.347 17:54:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:48.347 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:48.347 17:54:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
00:28:48.347 17:54:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:48.347 17:54:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.347 17:54:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:48.347 17:54:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.347 17:54:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:48.347 Found net devices under 0000:86:00.0: cvl_0_0 00:28:48.347 17:54:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.347 17:54:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:48.347 17:54:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.347 17:54:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:48.347 17:54:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.347 17:54:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:48.347 Found net devices under 0000:86:00.1: cvl_0_1 00:28:48.347 17:54:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.347 17:54:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:48.347 17:54:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:48.347 17:54:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:48.347 17:54:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.347 17:54:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.347 17:54:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.347 17:54:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:48.347 17:54:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.347 17:54:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.347 17:54:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:48.347 17:54:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.347 17:54:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.347 17:54:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:48.347 17:54:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:48.347 17:54:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.347 17:54:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.347 17:54:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.347 17:54:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.347 17:54:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:48.347 17:54:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.347 17:54:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.347 17:54:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.347 17:54:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:48.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:48.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:28:48.347 00:28:48.347 --- 10.0.0.2 ping statistics --- 00:28:48.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.347 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:28:48.347 17:54:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:28:48.347 00:28:48.347 --- 10.0.0.1 ping statistics --- 00:28:48.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.347 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:28:48.347 17:54:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.347 17:54:09 -- nvmf/common.sh@410 -- # return 0 00:28:48.347 17:54:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:48.347 17:54:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.347 17:54:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:48.347 17:54:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.347 17:54:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:48.347 17:54:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:48.347 17:54:09 -- host/bdevperf.sh@25 -- # tgt_init 00:28:48.347 17:54:09 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:48.347 17:54:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:48.347 17:54:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:48.347 17:54:09 -- common/autotest_common.sh@10 -- # set +x 00:28:48.347 17:54:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:48.347 17:54:09 -- nvmf/common.sh@469 -- # nvmfpid=777466 00:28:48.347 17:54:09 -- nvmf/common.sh@470 -- # waitforlisten 777466 00:28:48.347 17:54:09 -- common/autotest_common.sh@819 -- # '[' -z 777466 ']' 00:28:48.347 17:54:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.347 17:54:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:48.347 17:54:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.347 17:54:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:48.347 17:54:09 -- common/autotest_common.sh@10 -- # set +x 00:28:48.347 [2024-07-24 17:54:09.859562] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:48.347 [2024-07-24 17:54:09.859603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.347 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.347 [2024-07-24 17:54:09.917039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:48.608 [2024-07-24 17:54:09.993831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:48.608 [2024-07-24 17:54:09.993938] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
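The two pings above close out the network prep: one E810 port (cvl_0_0) has been moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2 as the target side, the other (cvl_0_1) stays in the root namespace as the initiator side with 10.0.0.1, and TCP port 4420 is opened in iptables. Condensing the ip/iptables commands visible in the log into one place (interface names and addresses are taken from the log; run as root, and the two ports are assumed to be physically looped to each other in this phy setup):

#!/usr/bin/env bash
# netns_setup.sh - the target/initiator split used by this run, condensed:
# the target port lives in its own network namespace, the initiator port
# stays in the root namespace, and each side can ping the other.
set -e
TARGET_IF=cvl_0_0        # becomes 10.0.0.2 inside the namespace
INITIATOR_IF=cvl_0_1     # stays on the host as 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP (port 4420) in on the initiator port, then check both directions
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1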
00:28:48.608 [2024-07-24 17:54:09.993945] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.608 [2024-07-24 17:54:09.993952] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.608 [2024-07-24 17:54:09.994051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.608 [2024-07-24 17:54:09.994134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.608 [2024-07-24 17:54:09.994136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.205 17:54:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:49.205 17:54:10 -- common/autotest_common.sh@852 -- # return 0 00:28:49.205 17:54:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:49.205 17:54:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:49.205 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:28:49.205 17:54:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.205 17:54:10 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.205 17:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.205 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:28:49.205 [2024-07-24 17:54:10.713272] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.205 17:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.205 17:54:10 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:49.205 17:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.205 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:28:49.205 Malloc0 00:28:49.205 17:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.205 17:54:10 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.205 17:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.205 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:28:49.205 17:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.205 17:54:10 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:49.205 17:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.205 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:28:49.205 17:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.205 17:54:10 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.205 17:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.205 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:28:49.205 [2024-07-24 17:54:10.785610] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.205 17:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.205 17:54:10 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:49.205 17:54:10 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:49.205 17:54:10 -- nvmf/common.sh@520 -- # config=() 00:28:49.205 17:54:10 -- nvmf/common.sh@520 -- # local subsystem config 00:28:49.205 17:54:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:49.205 17:54:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:49.205 { 
00:28:49.205 "params": { 00:28:49.205 "name": "Nvme$subsystem", 00:28:49.205 "trtype": "$TEST_TRANSPORT", 00:28:49.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.205 "adrfam": "ipv4", 00:28:49.205 "trsvcid": "$NVMF_PORT", 00:28:49.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.205 "hdgst": ${hdgst:-false}, 00:28:49.205 "ddgst": ${ddgst:-false} 00:28:49.205 }, 00:28:49.205 "method": "bdev_nvme_attach_controller" 00:28:49.205 } 00:28:49.205 EOF 00:28:49.205 )") 00:28:49.205 17:54:10 -- nvmf/common.sh@542 -- # cat 00:28:49.205 17:54:10 -- nvmf/common.sh@544 -- # jq . 00:28:49.472 17:54:10 -- nvmf/common.sh@545 -- # IFS=, 00:28:49.472 17:54:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:49.472 "params": { 00:28:49.472 "name": "Nvme1", 00:28:49.472 "trtype": "tcp", 00:28:49.472 "traddr": "10.0.0.2", 00:28:49.472 "adrfam": "ipv4", 00:28:49.472 "trsvcid": "4420", 00:28:49.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:49.472 "hdgst": false, 00:28:49.473 "ddgst": false 00:28:49.473 }, 00:28:49.473 "method": "bdev_nvme_attach_controller" 00:28:49.473 }' 00:28:49.473 [2024-07-24 17:54:10.831065] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:49.473 [2024-07-24 17:54:10.831114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777552 ] 00:28:49.473 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.473 [2024-07-24 17:54:10.887659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.473 [2024-07-24 17:54:10.959301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.733 Running I/O for 1 seconds... 
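The JSON printed just above is everything the first bdevperf run needs: a single bdev_nvme_attach_controller call pointing at the listener created earlier on 10.0.0.2:4420. To reproduce that run outside the script, the same entry can be written to a file and handed to bdevperf with --json. The params below are copied from the log; the surrounding subsystems/bdev/config skeleton is the usual SPDK JSON-config layout and is an assumption here, since the log only prints the inner entry, and SPDK_DIR just defaults to this workspace's path.

#!/usr/bin/env bash
# run_bdevperf.sh - sketch: attach to the NVMe/TCP subsystem from the log and
# run the same short verify workload (-q 128 -o 4096 -w verify -t 1).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

"$SPDK_DIR/build/examples/bdevperf" --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1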
00:28:50.671 00:28:50.671 Latency(us) 00:28:50.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.671 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:50.671 Verification LBA range: start 0x0 length 0x4000 00:28:50.671 Nvme1n1 : 1.01 16333.07 63.80 0.00 0.00 7804.76 1652.65 23592.96 00:28:50.671 =================================================================================================================== 00:28:50.671 Total : 16333.07 63.80 0.00 0.00 7804.76 1652.65 23592.96 00:28:50.931 17:54:12 -- host/bdevperf.sh@30 -- # bdevperfpid=777852 00:28:50.931 17:54:12 -- host/bdevperf.sh@32 -- # sleep 3 00:28:50.931 17:54:12 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:50.931 17:54:12 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:50.931 17:54:12 -- nvmf/common.sh@520 -- # config=() 00:28:50.931 17:54:12 -- nvmf/common.sh@520 -- # local subsystem config 00:28:50.931 17:54:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:50.931 17:54:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:50.931 { 00:28:50.931 "params": { 00:28:50.931 "name": "Nvme$subsystem", 00:28:50.931 "trtype": "$TEST_TRANSPORT", 00:28:50.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.931 "adrfam": "ipv4", 00:28:50.931 "trsvcid": "$NVMF_PORT", 00:28:50.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.931 "hdgst": ${hdgst:-false}, 00:28:50.931 "ddgst": ${ddgst:-false} 00:28:50.931 }, 00:28:50.931 "method": "bdev_nvme_attach_controller" 00:28:50.931 } 00:28:50.931 EOF 00:28:50.931 )") 00:28:50.931 17:54:12 -- nvmf/common.sh@542 -- # cat 00:28:50.931 17:54:12 -- nvmf/common.sh@544 -- # jq . 00:28:50.931 17:54:12 -- nvmf/common.sh@545 -- # IFS=, 00:28:50.931 17:54:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:50.931 "params": { 00:28:50.931 "name": "Nvme1", 00:28:50.931 "trtype": "tcp", 00:28:50.931 "traddr": "10.0.0.2", 00:28:50.931 "adrfam": "ipv4", 00:28:50.931 "trsvcid": "4420", 00:28:50.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:50.931 "hdgst": false, 00:28:50.931 "ddgst": false 00:28:50.931 }, 00:28:50.931 "method": "bdev_nvme_attach_controller" 00:28:50.931 }' 00:28:50.931 [2024-07-24 17:54:12.378165] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:50.931 [2024-07-24 17:54:12.378215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777852 ] 00:28:50.931 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.931 [2024-07-24 17:54:12.434472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.931 [2024-07-24 17:54:12.501763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.500 Running I/O for 15 seconds... 
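From here the test turns into a failover exercise: a 15-second verify run is started (with -f, as on the command line above), and the very next lines hard-kill the nvmf_tgt and sleep while I/O is still in flight. A compressed, hypothetical outline of that sequence (pid handling is simplified; the real host/bdevperf.sh tracks $nvmfpid and $bdevperfpid itself and feeds the config over /dev/fd rather than a temp file):

#!/usr/bin/env bash
# failover_sketch.sh - hypothetical outline: keep bdevperf running while the
# NVMe-oF target is hard-killed and brought back, exercising the reconnect path.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
NS=cvl_0_0_ns_spdk

# long verify run; -q/-o/-w/-t/-f are the options from the log, /tmp/nvme1.json
# is the config file from the previous sketch
"$SPDK_DIR/build/examples/bdevperf" --json /tmp/nvme1.json \
    -q 128 -o 4096 -w verify -t 15 -f &
perf_pid=$!

sleep 3
tgt_pid=$(pgrep -f nvmf_tgt | head -n 1)   # the script uses its saved pid instead
kill -9 "$tgt_pid"                          # kill the target mid-I/O
sleep 3

# bring the target back inside its namespace so the host side can reconnect
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &

wait "$perf_pid"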
00:28:54.042 17:54:15 -- host/bdevperf.sh@33 -- # kill -9 777466 00:28:54.042 17:54:15 -- host/bdevperf.sh@35 -- # sleep 3 00:28:54.042 [2024-07-24 17:54:15.353379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353623] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.042 [2024-07-24 17:54:15.353782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.042 [2024-07-24 17:54:15.353794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.353805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.353817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.353828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.353840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.353851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.353862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.353873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.353885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.353896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.353908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.353918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.353930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.353940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.353952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.353962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.353977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.353987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.353999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.043 [2024-07-24 17:54:15.354033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:54.043 [2024-07-24 17:54:15.354097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.043 [2024-07-24 17:54:15.354153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 
17:54:15.354331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.043 [2024-07-24 17:54:15.354386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.043 [2024-07-24 17:54:15.354409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.043 [2024-07-24 17:54:15.354454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.043 [2024-07-24 17:54:15.354497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.043 [2024-07-24 17:54:15.354545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.043 [2024-07-24 17:54:15.354569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.043 [2024-07-24 17:54:15.354591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.043 [2024-07-24 17:54:15.354636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.043 [2024-07-24 17:54:15.354692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.043 [2024-07-24 17:54:15.354702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.354724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.354747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.354769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:114 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.354791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.354813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.354838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.354860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.354882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.354906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.354928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.354951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.354973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.354985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.354995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72384 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.355018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.355093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.355115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 
[2024-07-24 17:54:15.355252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.355322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.355344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.355366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.355411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.355504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.044 [2024-07-24 17:54:15.355526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.044 [2024-07-24 17:54:15.355592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.044 [2024-07-24 17:54:15.355604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.355615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.355641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.355664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.355686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.355711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.355734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.355758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.355780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.355802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.355825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.355848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.355870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.355892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.355915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.355937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.355960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.355982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.355997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.356007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.356029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.356241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.356265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.356288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.356310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.045 [2024-07-24 17:54:15.356333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.356356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 
17:54:15.356368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.356378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.356401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.356424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.356446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.356468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.045 [2024-07-24 17:54:15.356494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131bb80 is same with the state(5) to be set 00:28:54.045 [2024-07-24 17:54:15.356518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:54.045 [2024-07-24 17:54:15.356526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:54.045 [2024-07-24 17:54:15.356537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72160 len:8 PRP1 0x0 PRP2 0x0 00:28:54.045 [2024-07-24 17:54:15.356547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.045 [2024-07-24 17:54:15.356599] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x131bb80 was disconnected and freed. reset controller. 
00:28:54.045 [2024-07-24 17:54:15.358645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.045 [2024-07-24 17:54:15.358713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.045 [2024-07-24 17:54:15.359432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.045 [2024-07-24 17:54:15.359804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.045 [2024-07-24 17:54:15.359820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.045 [2024-07-24 17:54:15.359833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.045 [2024-07-24 17:54:15.359987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.045 [2024-07-24 17:54:15.360128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.045 [2024-07-24 17:54:15.360141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.045 [2024-07-24 17:54:15.360154] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.045 [2024-07-24 17:54:15.362010] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.045 [2024-07-24 17:54:15.371084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.045 [2024-07-24 17:54:15.371404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.045 [2024-07-24 17:54:15.371643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.045 [2024-07-24 17:54:15.371657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.046 [2024-07-24 17:54:15.371668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.046 [2024-07-24 17:54:15.371794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.046 [2024-07-24 17:54:15.371914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.046 [2024-07-24 17:54:15.371924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.046 [2024-07-24 17:54:15.371935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.046 [2024-07-24 17:54:15.373797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.046 [2024-07-24 17:54:15.382981] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.046 [2024-07-24 17:54:15.383573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.384032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.384089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.046 [2024-07-24 17:54:15.384125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.046 [2024-07-24 17:54:15.384593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.046 [2024-07-24 17:54:15.384994] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.046 [2024-07-24 17:54:15.385033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.046 [2024-07-24 17:54:15.385051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.046 [2024-07-24 17:54:15.386890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.046 [2024-07-24 17:54:15.395205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.046 [2024-07-24 17:54:15.395750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.396217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.396263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.046 [2024-07-24 17:54:15.396296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.046 [2024-07-24 17:54:15.396670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.046 [2024-07-24 17:54:15.396938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.046 [2024-07-24 17:54:15.396948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.046 [2024-07-24 17:54:15.396956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.046 [2024-07-24 17:54:15.398597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.046 [2024-07-24 17:54:15.407213] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.046 [2024-07-24 17:54:15.407717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.408119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.408162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.046 [2024-07-24 17:54:15.408198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.046 [2024-07-24 17:54:15.408518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.046 [2024-07-24 17:54:15.408879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.046 [2024-07-24 17:54:15.408888] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.046 [2024-07-24 17:54:15.408897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.046 [2024-07-24 17:54:15.410750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.046 [2024-07-24 17:54:15.418934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.046 [2024-07-24 17:54:15.419501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.419948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.419990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.046 [2024-07-24 17:54:15.420025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.046 [2024-07-24 17:54:15.420413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.046 [2024-07-24 17:54:15.420665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.046 [2024-07-24 17:54:15.420674] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.046 [2024-07-24 17:54:15.420683] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.046 [2024-07-24 17:54:15.422431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.046 [2024-07-24 17:54:15.430948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.046 [2024-07-24 17:54:15.431454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.431905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.431946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.046 [2024-07-24 17:54:15.431980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.046 [2024-07-24 17:54:15.432240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.046 [2024-07-24 17:54:15.432418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.046 [2024-07-24 17:54:15.432432] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.046 [2024-07-24 17:54:15.432446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.046 [2024-07-24 17:54:15.435142] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.046 [2024-07-24 17:54:15.443208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.046 [2024-07-24 17:54:15.443763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.444235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.444278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.046 [2024-07-24 17:54:15.444312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.046 [2024-07-24 17:54:15.444682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.046 [2024-07-24 17:54:15.445040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.046 [2024-07-24 17:54:15.445057] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.046 [2024-07-24 17:54:15.445066] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.046 [2024-07-24 17:54:15.446900] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.046 [2024-07-24 17:54:15.455174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.046 [2024-07-24 17:54:15.455573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.456031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.456094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.046 [2024-07-24 17:54:15.456128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.046 [2024-07-24 17:54:15.456545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.046 [2024-07-24 17:54:15.457046] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.046 [2024-07-24 17:54:15.457055] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.046 [2024-07-24 17:54:15.457065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.046 [2024-07-24 17:54:15.458740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.046 [2024-07-24 17:54:15.466931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.046 [2024-07-24 17:54:15.467395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.467846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.046 [2024-07-24 17:54:15.467886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.046 [2024-07-24 17:54:15.467920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.046 [2024-07-24 17:54:15.468235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.046 [2024-07-24 17:54:15.468356] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.046 [2024-07-24 17:54:15.468365] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.046 [2024-07-24 17:54:15.468375] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.046 [2024-07-24 17:54:15.470138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.046 [2024-07-24 17:54:15.478855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.046 [2024-07-24 17:54:15.479490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.479894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.479907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.047 [2024-07-24 17:54:15.479916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.047 [2024-07-24 17:54:15.480035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.047 [2024-07-24 17:54:15.480180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.047 [2024-07-24 17:54:15.480189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.047 [2024-07-24 17:54:15.480198] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.047 [2024-07-24 17:54:15.481829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.047 [2024-07-24 17:54:15.490691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.047 [2024-07-24 17:54:15.491252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.491753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.491794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.047 [2024-07-24 17:54:15.491848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.047 [2024-07-24 17:54:15.491974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.047 [2024-07-24 17:54:15.492071] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.047 [2024-07-24 17:54:15.492081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.047 [2024-07-24 17:54:15.492090] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.047 [2024-07-24 17:54:15.493844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.047 [2024-07-24 17:54:15.502540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.047 [2024-07-24 17:54:15.503074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.503545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.503585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.047 [2024-07-24 17:54:15.503620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.047 [2024-07-24 17:54:15.503923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.047 [2024-07-24 17:54:15.504022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.047 [2024-07-24 17:54:15.504031] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.047 [2024-07-24 17:54:15.504040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.047 [2024-07-24 17:54:15.505760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.047 [2024-07-24 17:54:15.514663] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.047 [2024-07-24 17:54:15.515281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.515821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.515861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.047 [2024-07-24 17:54:15.515895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.047 [2024-07-24 17:54:15.516331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.047 [2024-07-24 17:54:15.516444] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.047 [2024-07-24 17:54:15.516454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.047 [2024-07-24 17:54:15.516463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.047 [2024-07-24 17:54:15.518274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.047 [2024-07-24 17:54:15.526555] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.047 [2024-07-24 17:54:15.527140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.527671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.527711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.047 [2024-07-24 17:54:15.527745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.047 [2024-07-24 17:54:15.528085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.047 [2024-07-24 17:54:15.528219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.047 [2024-07-24 17:54:15.528229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.047 [2024-07-24 17:54:15.528238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.047 [2024-07-24 17:54:15.529996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.047 [2024-07-24 17:54:15.538333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.047 [2024-07-24 17:54:15.538957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.539224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.539267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.047 [2024-07-24 17:54:15.539300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.047 [2024-07-24 17:54:15.539670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.047 [2024-07-24 17:54:15.539973] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.047 [2024-07-24 17:54:15.539982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.047 [2024-07-24 17:54:15.539990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.047 [2024-07-24 17:54:15.541726] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.047 [2024-07-24 17:54:15.550282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.047 [2024-07-24 17:54:15.550805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.551336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.551379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.047 [2024-07-24 17:54:15.551412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.047 [2024-07-24 17:54:15.551830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.047 [2024-07-24 17:54:15.552300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.047 [2024-07-24 17:54:15.552325] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.047 [2024-07-24 17:54:15.552334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.047 [2024-07-24 17:54:15.553980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.047 [2024-07-24 17:54:15.562172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.047 [2024-07-24 17:54:15.562806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.563268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.563281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.047 [2024-07-24 17:54:15.563290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.047 [2024-07-24 17:54:15.563436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.047 [2024-07-24 17:54:15.563616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.047 [2024-07-24 17:54:15.563630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.047 [2024-07-24 17:54:15.563644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.047 [2024-07-24 17:54:15.566204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.047 [2024-07-24 17:54:15.574419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.047 [2024-07-24 17:54:15.575041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.047 [2024-07-24 17:54:15.575524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.575565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.048 [2024-07-24 17:54:15.575598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.048 [2024-07-24 17:54:15.576009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.048 [2024-07-24 17:54:15.576120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.048 [2024-07-24 17:54:15.576130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.048 [2024-07-24 17:54:15.576140] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.048 [2024-07-24 17:54:15.577931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.048 [2024-07-24 17:54:15.586257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.048 [2024-07-24 17:54:15.586868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.587182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.587222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.048 [2024-07-24 17:54:15.587257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.048 [2024-07-24 17:54:15.587394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.048 [2024-07-24 17:54:15.587498] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.048 [2024-07-24 17:54:15.587508] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.048 [2024-07-24 17:54:15.587517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.048 [2024-07-24 17:54:15.589274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.048 [2024-07-24 17:54:15.598139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.048 [2024-07-24 17:54:15.598734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.599199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.599213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.048 [2024-07-24 17:54:15.599223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.048 [2024-07-24 17:54:15.599342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.048 [2024-07-24 17:54:15.599426] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.048 [2024-07-24 17:54:15.599438] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.048 [2024-07-24 17:54:15.599447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.048 [2024-07-24 17:54:15.601139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.048 [2024-07-24 17:54:15.610074] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.048 [2024-07-24 17:54:15.610729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.611237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.611279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.048 [2024-07-24 17:54:15.611313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.048 [2024-07-24 17:54:15.611682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.048 [2024-07-24 17:54:15.611900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.048 [2024-07-24 17:54:15.611910] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.048 [2024-07-24 17:54:15.611920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.048 [2024-07-24 17:54:15.613746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.048 [2024-07-24 17:54:15.622208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.048 [2024-07-24 17:54:15.622798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.623134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.623148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.048 [2024-07-24 17:54:15.623158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.048 [2024-07-24 17:54:15.623271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.048 [2024-07-24 17:54:15.623395] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.048 [2024-07-24 17:54:15.623405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.048 [2024-07-24 17:54:15.623414] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.048 [2024-07-24 17:54:15.625350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.048 [2024-07-24 17:54:15.634156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.048 [2024-07-24 17:54:15.634824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.635277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.048 [2024-07-24 17:54:15.635330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.048 [2024-07-24 17:54:15.635340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.048 [2024-07-24 17:54:15.635496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.048 [2024-07-24 17:54:15.635604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.048 [2024-07-24 17:54:15.635614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.048 [2024-07-24 17:54:15.635627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.309 [2024-07-24 17:54:15.637357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.309 [2024-07-24 17:54:15.646128] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.309 [2024-07-24 17:54:15.646741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.309 [2024-07-24 17:54:15.647264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.309 [2024-07-24 17:54:15.647307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.309 [2024-07-24 17:54:15.647343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.309 [2024-07-24 17:54:15.647622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.309 [2024-07-24 17:54:15.647741] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.310 [2024-07-24 17:54:15.647750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.310 [2024-07-24 17:54:15.647759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.310 [2024-07-24 17:54:15.649587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.310 [2024-07-24 17:54:15.657924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.310 [2024-07-24 17:54:15.658532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.658775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.658815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.310 [2024-07-24 17:54:15.658850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.310 [2024-07-24 17:54:15.659285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.310 [2024-07-24 17:54:15.659589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.310 [2024-07-24 17:54:15.659621] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.310 [2024-07-24 17:54:15.659651] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.310 [2024-07-24 17:54:15.661375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.310 [2024-07-24 17:54:15.669808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.310 [2024-07-24 17:54:15.670268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.670779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.670820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.310 [2024-07-24 17:54:15.670854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.310 [2024-07-24 17:54:15.671085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.310 [2024-07-24 17:54:15.671251] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.310 [2024-07-24 17:54:15.671260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.310 [2024-07-24 17:54:15.671269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.310 [2024-07-24 17:54:15.672922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.310 [2024-07-24 17:54:15.681589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.310 [2024-07-24 17:54:15.682234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.682761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.682801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.310 [2024-07-24 17:54:15.682835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.310 [2024-07-24 17:54:15.683285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.310 [2024-07-24 17:54:15.683688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.310 [2024-07-24 17:54:15.683719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.310 [2024-07-24 17:54:15.683746] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.310 [2024-07-24 17:54:15.685533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.310 [2024-07-24 17:54:15.693289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.310 [2024-07-24 17:54:15.693903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.694369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.694411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.310 [2024-07-24 17:54:15.694445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.310 [2024-07-24 17:54:15.694838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.310 [2024-07-24 17:54:15.694992] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.310 [2024-07-24 17:54:15.695006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.310 [2024-07-24 17:54:15.695020] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.310 [2024-07-24 17:54:15.697602] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.310 [2024-07-24 17:54:15.705620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.310 [2024-07-24 17:54:15.706247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.706726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.706765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.310 [2024-07-24 17:54:15.706798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.310 [2024-07-24 17:54:15.707182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.310 [2024-07-24 17:54:15.707584] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.310 [2024-07-24 17:54:15.707616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.310 [2024-07-24 17:54:15.707648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.310 [2024-07-24 17:54:15.709453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.310 [2024-07-24 17:54:15.717527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.310 [2024-07-24 17:54:15.718163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.718640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.718652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.310 [2024-07-24 17:54:15.718662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.310 [2024-07-24 17:54:15.718766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.310 [2024-07-24 17:54:15.718892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.310 [2024-07-24 17:54:15.718901] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.310 [2024-07-24 17:54:15.718910] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.310 [2024-07-24 17:54:15.720561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.310 [2024-07-24 17:54:15.729396] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.310 [2024-07-24 17:54:15.730015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.730479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.730520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.310 [2024-07-24 17:54:15.730554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.310 [2024-07-24 17:54:15.730969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.310 [2024-07-24 17:54:15.731094] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.310 [2024-07-24 17:54:15.731104] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.310 [2024-07-24 17:54:15.731113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.310 [2024-07-24 17:54:15.732698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.310 [2024-07-24 17:54:15.741345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.310 [2024-07-24 17:54:15.741933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.742384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.742425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.310 [2024-07-24 17:54:15.742458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.310 [2024-07-24 17:54:15.742633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.310 [2024-07-24 17:54:15.742745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.310 [2024-07-24 17:54:15.742754] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.310 [2024-07-24 17:54:15.742763] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.310 [2024-07-24 17:54:15.744620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.310 [2024-07-24 17:54:15.753190] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.310 [2024-07-24 17:54:15.753833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.754369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.310 [2024-07-24 17:54:15.754411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.310 [2024-07-24 17:54:15.754444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.310 [2024-07-24 17:54:15.754894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.310 [2024-07-24 17:54:15.754965] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.310 [2024-07-24 17:54:15.754974] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.310 [2024-07-24 17:54:15.754983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.310 [2024-07-24 17:54:15.756635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.311 [2024-07-24 17:54:15.765017] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.311 [2024-07-24 17:54:15.765627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.766153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.766195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.311 [2024-07-24 17:54:15.766228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.311 [2024-07-24 17:54:15.766550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.311 [2024-07-24 17:54:15.766650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.311 [2024-07-24 17:54:15.766658] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.311 [2024-07-24 17:54:15.766667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.311 [2024-07-24 17:54:15.768446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.311 [2024-07-24 17:54:15.776855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.311 [2024-07-24 17:54:15.777494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.778035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.778091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.311 [2024-07-24 17:54:15.778124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.311 [2024-07-24 17:54:15.778490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.311 [2024-07-24 17:54:15.778593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.311 [2024-07-24 17:54:15.778603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.311 [2024-07-24 17:54:15.778612] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.311 [2024-07-24 17:54:15.780560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.311 [2024-07-24 17:54:15.788702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.311 [2024-07-24 17:54:15.789318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.789853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.789901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.311 [2024-07-24 17:54:15.789934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.311 [2024-07-24 17:54:15.790469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.311 [2024-07-24 17:54:15.790922] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.311 [2024-07-24 17:54:15.790954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.311 [2024-07-24 17:54:15.790986] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.311 [2024-07-24 17:54:15.792984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.311 [2024-07-24 17:54:15.800676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.311 [2024-07-24 17:54:15.801319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.801841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.801881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.311 [2024-07-24 17:54:15.801914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.311 [2024-07-24 17:54:15.802445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.311 [2024-07-24 17:54:15.802697] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.311 [2024-07-24 17:54:15.802728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.311 [2024-07-24 17:54:15.802769] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.311 [2024-07-24 17:54:15.804295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.311 [2024-07-24 17:54:15.812659] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.311 [2024-07-24 17:54:15.813255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.813794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.813835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.311 [2024-07-24 17:54:15.813868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.311 [2024-07-24 17:54:15.814398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.311 [2024-07-24 17:54:15.814858] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.311 [2024-07-24 17:54:15.814868] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.311 [2024-07-24 17:54:15.814877] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.311 [2024-07-24 17:54:15.816536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.311 [2024-07-24 17:54:15.824465] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.311 [2024-07-24 17:54:15.824999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.825522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.825563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.311 [2024-07-24 17:54:15.825608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.311 [2024-07-24 17:54:15.826026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.311 [2024-07-24 17:54:15.826212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.311 [2024-07-24 17:54:15.826222] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.311 [2024-07-24 17:54:15.826231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.311 [2024-07-24 17:54:15.827880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.311 [2024-07-24 17:54:15.836112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.311 [2024-07-24 17:54:15.836747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.837276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.837319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.311 [2024-07-24 17:54:15.837354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.311 [2024-07-24 17:54:15.837818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.311 [2024-07-24 17:54:15.838065] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.311 [2024-07-24 17:54:15.838075] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.311 [2024-07-24 17:54:15.838083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.311 [2024-07-24 17:54:15.839772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.311 [2024-07-24 17:54:15.848063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.311 [2024-07-24 17:54:15.848622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.849128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.849158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.311 [2024-07-24 17:54:15.849168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.311 [2024-07-24 17:54:15.849287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.311 [2024-07-24 17:54:15.849399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.311 [2024-07-24 17:54:15.849408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.311 [2024-07-24 17:54:15.849417] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.311 [2024-07-24 17:54:15.851147] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.311 [2024-07-24 17:54:15.859958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.311 [2024-07-24 17:54:15.860513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.860976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.311 [2024-07-24 17:54:15.861018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.311 [2024-07-24 17:54:15.861066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.311 [2024-07-24 17:54:15.861398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.311 [2024-07-24 17:54:15.861899] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.311 [2024-07-24 17:54:15.861931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.311 [2024-07-24 17:54:15.861962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.311 [2024-07-24 17:54:15.864158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.311 [2024-07-24 17:54:15.872019] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.311 [2024-07-24 17:54:15.872523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.312 [2024-07-24 17:54:15.872927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.312 [2024-07-24 17:54:15.872967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.312 [2024-07-24 17:54:15.873001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.312 [2024-07-24 17:54:15.873482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.312 [2024-07-24 17:54:15.873934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.312 [2024-07-24 17:54:15.873965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.312 [2024-07-24 17:54:15.874006] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.312 [2024-07-24 17:54:15.875775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.312 [2024-07-24 17:54:15.884054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.312 [2024-07-24 17:54:15.884673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.312 [2024-07-24 17:54:15.885178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.312 [2024-07-24 17:54:15.885220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.312 [2024-07-24 17:54:15.885254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.312 [2024-07-24 17:54:15.885721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.312 [2024-07-24 17:54:15.885945] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.312 [2024-07-24 17:54:15.885954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.312 [2024-07-24 17:54:15.885964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.312 [2024-07-24 17:54:15.887604] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.312 [2024-07-24 17:54:15.895964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.312 [2024-07-24 17:54:15.896584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.312 [2024-07-24 17:54:15.897109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.312 [2024-07-24 17:54:15.897152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.312 [2024-07-24 17:54:15.897185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.312 [2024-07-24 17:54:15.897583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.312 [2024-07-24 17:54:15.897740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.312 [2024-07-24 17:54:15.897754] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.312 [2024-07-24 17:54:15.897768] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.312 [2024-07-24 17:54:15.900455] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.574 [2024-07-24 17:54:15.908519] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.574 [2024-07-24 17:54:15.909166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.909569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.909609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.574 [2024-07-24 17:54:15.909643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.574 [2024-07-24 17:54:15.910147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.574 [2024-07-24 17:54:15.910253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.574 [2024-07-24 17:54:15.910262] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.574 [2024-07-24 17:54:15.910272] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.574 [2024-07-24 17:54:15.911963] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.574 [2024-07-24 17:54:15.920334] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.574 [2024-07-24 17:54:15.920681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.921210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.921253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.574 [2024-07-24 17:54:15.921287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.574 [2024-07-24 17:54:15.921854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.574 [2024-07-24 17:54:15.922041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.574 [2024-07-24 17:54:15.922056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.574 [2024-07-24 17:54:15.922065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.574 [2024-07-24 17:54:15.923647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.574 [2024-07-24 17:54:15.932158] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.574 [2024-07-24 17:54:15.932792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.933261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.933274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.574 [2024-07-24 17:54:15.933284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.574 [2024-07-24 17:54:15.933431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.574 [2024-07-24 17:54:15.933515] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.574 [2024-07-24 17:54:15.933527] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.574 [2024-07-24 17:54:15.933536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.574 [2024-07-24 17:54:15.935244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.574 [2024-07-24 17:54:15.944050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.574 [2024-07-24 17:54:15.944684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.945189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.945232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.574 [2024-07-24 17:54:15.945267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.574 [2024-07-24 17:54:15.945684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.574 [2024-07-24 17:54:15.945987] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.574 [2024-07-24 17:54:15.946019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.574 [2024-07-24 17:54:15.946063] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.574 [2024-07-24 17:54:15.948095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.574 [2024-07-24 17:54:15.956011] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.574 [2024-07-24 17:54:15.956655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.957153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.957166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.574 [2024-07-24 17:54:15.957176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.574 [2024-07-24 17:54:15.957335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.574 [2024-07-24 17:54:15.957435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.574 [2024-07-24 17:54:15.957445] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.574 [2024-07-24 17:54:15.957454] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.574 [2024-07-24 17:54:15.959047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.574 [2024-07-24 17:54:15.967842] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.574 [2024-07-24 17:54:15.968456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.968962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.969003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.574 [2024-07-24 17:54:15.969036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.574 [2024-07-24 17:54:15.969355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.574 [2024-07-24 17:54:15.969475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.574 [2024-07-24 17:54:15.969484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.574 [2024-07-24 17:54:15.969501] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.574 [2024-07-24 17:54:15.971183] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.574 [2024-07-24 17:54:15.979449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.574 [2024-07-24 17:54:15.980088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.980597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.980637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.574 [2024-07-24 17:54:15.980670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.574 [2024-07-24 17:54:15.981099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.574 [2024-07-24 17:54:15.981451] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.574 [2024-07-24 17:54:15.981494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.574 [2024-07-24 17:54:15.981503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.574 [2024-07-24 17:54:15.983148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.574 [2024-07-24 17:54:15.991259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.574 [2024-07-24 17:54:15.991925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.992438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:15.992481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.574 [2024-07-24 17:54:15.992515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.574 [2024-07-24 17:54:15.992935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.574 [2024-07-24 17:54:15.993249] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.574 [2024-07-24 17:54:15.993282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.574 [2024-07-24 17:54:15.993313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.574 [2024-07-24 17:54:15.995118] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.574 [2024-07-24 17:54:16.003257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.574 [2024-07-24 17:54:16.003806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:16.004341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.574 [2024-07-24 17:54:16.004383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.574 [2024-07-24 17:54:16.004416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.574 [2024-07-24 17:54:16.004835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.574 [2024-07-24 17:54:16.005245] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.574 [2024-07-24 17:54:16.005278] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.575 [2024-07-24 17:54:16.005308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.575 [2024-07-24 17:54:16.007234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.575 [2024-07-24 17:54:16.015297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.575 [2024-07-24 17:54:16.015928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.016484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.016527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.575 [2024-07-24 17:54:16.016560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.575 [2024-07-24 17:54:16.016880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.575 [2024-07-24 17:54:16.017130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.575 [2024-07-24 17:54:16.017140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.575 [2024-07-24 17:54:16.017149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.575 [2024-07-24 17:54:16.018799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.575 [2024-07-24 17:54:16.027069] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.575 [2024-07-24 17:54:16.027597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.028125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.028164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.575 [2024-07-24 17:54:16.028174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.575 [2024-07-24 17:54:16.028321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.575 [2024-07-24 17:54:16.028461] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.575 [2024-07-24 17:54:16.028470] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.575 [2024-07-24 17:54:16.028479] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.575 [2024-07-24 17:54:16.030158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.575 [2024-07-24 17:54:16.039029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.575 [2024-07-24 17:54:16.039649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.040172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.040213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.575 [2024-07-24 17:54:16.040246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.575 [2024-07-24 17:54:16.040584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.575 [2024-07-24 17:54:16.040697] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.575 [2024-07-24 17:54:16.040706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.575 [2024-07-24 17:54:16.040715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.575 [2024-07-24 17:54:16.042434] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.575 [2024-07-24 17:54:16.050799] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.575 [2024-07-24 17:54:16.051416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.051936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.051977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.575 [2024-07-24 17:54:16.052010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.575 [2024-07-24 17:54:16.052447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.575 [2024-07-24 17:54:16.052625] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.575 [2024-07-24 17:54:16.052635] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.575 [2024-07-24 17:54:16.052644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.575 [2024-07-24 17:54:16.054386] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.575 [2024-07-24 17:54:16.062436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.575 [2024-07-24 17:54:16.063032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.063595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.063635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.575 [2024-07-24 17:54:16.063668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.575 [2024-07-24 17:54:16.063919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.575 [2024-07-24 17:54:16.064091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.575 [2024-07-24 17:54:16.064102] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.575 [2024-07-24 17:54:16.064111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.575 [2024-07-24 17:54:16.065811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.575 [2024-07-24 17:54:16.074287] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.575 [2024-07-24 17:54:16.074820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.075215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.075257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.575 [2024-07-24 17:54:16.075290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.575 [2024-07-24 17:54:16.075593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.575 [2024-07-24 17:54:16.075727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.575 [2024-07-24 17:54:16.075736] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.575 [2024-07-24 17:54:16.075745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.575 [2024-07-24 17:54:16.077367] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.575 [2024-07-24 17:54:16.086146] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.575 [2024-07-24 17:54:16.086722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.087206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.087248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.575 [2024-07-24 17:54:16.087282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.575 [2024-07-24 17:54:16.087413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.575 [2024-07-24 17:54:16.087526] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.575 [2024-07-24 17:54:16.087535] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.575 [2024-07-24 17:54:16.087543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.575 [2024-07-24 17:54:16.089272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.575 [2024-07-24 17:54:16.097943] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.575 [2024-07-24 17:54:16.098532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.099002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.099058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.575 [2024-07-24 17:54:16.099095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.575 [2024-07-24 17:54:16.099570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.575 [2024-07-24 17:54:16.099724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.575 [2024-07-24 17:54:16.099737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.575 [2024-07-24 17:54:16.099751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.575 [2024-07-24 17:54:16.102356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.575 [2024-07-24 17:54:16.110367] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.575 [2024-07-24 17:54:16.110985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.111412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.575 [2024-07-24 17:54:16.111427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.575 [2024-07-24 17:54:16.111438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.575 [2024-07-24 17:54:16.111550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.575 [2024-07-24 17:54:16.111685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.575 [2024-07-24 17:54:16.111694] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.575 [2024-07-24 17:54:16.111704] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.576 [2024-07-24 17:54:16.113564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.576 [2024-07-24 17:54:16.122506] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.576 [2024-07-24 17:54:16.123122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.576 [2024-07-24 17:54:16.123636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.576 [2024-07-24 17:54:16.123684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.576 [2024-07-24 17:54:16.123719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.576 [2024-07-24 17:54:16.123954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.576 [2024-07-24 17:54:16.124061] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.576 [2024-07-24 17:54:16.124087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.576 [2024-07-24 17:54:16.124096] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.576 [2024-07-24 17:54:16.125724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.576 [2024-07-24 17:54:16.134472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.576 [2024-07-24 17:54:16.135059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.576 [2024-07-24 17:54:16.135614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.576 [2024-07-24 17:54:16.135655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.576 [2024-07-24 17:54:16.135688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.576 [2024-07-24 17:54:16.135956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.576 [2024-07-24 17:54:16.136083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.576 [2024-07-24 17:54:16.136093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.576 [2024-07-24 17:54:16.136102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.576 [2024-07-24 17:54:16.138038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.576 [2024-07-24 17:54:16.146260] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.576 [2024-07-24 17:54:16.146848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.576 [2024-07-24 17:54:16.147376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.576 [2024-07-24 17:54:16.147418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.576 [2024-07-24 17:54:16.147452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.576 [2024-07-24 17:54:16.147791] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.576 [2024-07-24 17:54:16.147932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.576 [2024-07-24 17:54:16.147941] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.576 [2024-07-24 17:54:16.147950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.576 [2024-07-24 17:54:16.149601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.576 [2024-07-24 17:54:16.158168] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.576 [2024-07-24 17:54:16.158731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.576 [2024-07-24 17:54:16.159269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.576 [2024-07-24 17:54:16.159311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.576 [2024-07-24 17:54:16.159354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.576 [2024-07-24 17:54:16.159770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.576 [2024-07-24 17:54:16.160145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.576 [2024-07-24 17:54:16.160159] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.576 [2024-07-24 17:54:16.160172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.576 [2024-07-24 17:54:16.162793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.838 [2024-07-24 17:54:16.170635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.838 [2024-07-24 17:54:16.171185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.171716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.171757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.838 [2024-07-24 17:54:16.171801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.838 [2024-07-24 17:54:16.171880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.838 [2024-07-24 17:54:16.172010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.838 [2024-07-24 17:54:16.172019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.838 [2024-07-24 17:54:16.172028] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.838 [2024-07-24 17:54:16.173652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.838 [2024-07-24 17:54:16.182527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.838 [2024-07-24 17:54:16.183119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.183627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.183667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.838 [2024-07-24 17:54:16.183700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.838 [2024-07-24 17:54:16.183909] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.838 [2024-07-24 17:54:16.183995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.838 [2024-07-24 17:54:16.184004] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.838 [2024-07-24 17:54:16.184012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.838 [2024-07-24 17:54:16.185701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.838 [2024-07-24 17:54:16.194308] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.838 [2024-07-24 17:54:16.194905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.195451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.195493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.838 [2024-07-24 17:54:16.195527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.838 [2024-07-24 17:54:16.195892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.838 [2024-07-24 17:54:16.195991] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.838 [2024-07-24 17:54:16.196000] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.838 [2024-07-24 17:54:16.196009] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.838 [2024-07-24 17:54:16.197800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.838 [2024-07-24 17:54:16.206239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.838 [2024-07-24 17:54:16.207057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.207605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.207646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.838 [2024-07-24 17:54:16.207679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.838 [2024-07-24 17:54:16.207976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.838 [2024-07-24 17:54:16.208097] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.838 [2024-07-24 17:54:16.208108] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.838 [2024-07-24 17:54:16.208120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.838 [2024-07-24 17:54:16.209743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.838 [2024-07-24 17:54:16.218016] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.838 [2024-07-24 17:54:16.218580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.219105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.219146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.838 [2024-07-24 17:54:16.219179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.838 [2024-07-24 17:54:16.219646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.838 [2024-07-24 17:54:16.220145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.838 [2024-07-24 17:54:16.220155] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.838 [2024-07-24 17:54:16.220164] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.838 [2024-07-24 17:54:16.221871] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.838 [2024-07-24 17:54:16.229950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.838 [2024-07-24 17:54:16.230580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.231146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.231188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.838 [2024-07-24 17:54:16.231221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.838 [2024-07-24 17:54:16.231689] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.838 [2024-07-24 17:54:16.231998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.838 [2024-07-24 17:54:16.232030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.838 [2024-07-24 17:54:16.232071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.838 [2024-07-24 17:54:16.233840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.838 [2024-07-24 17:54:16.241812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.838 [2024-07-24 17:54:16.242411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.242966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.243008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.838 [2024-07-24 17:54:16.243041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.838 [2024-07-24 17:54:16.243473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.838 [2024-07-24 17:54:16.243944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.838 [2024-07-24 17:54:16.243953] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.838 [2024-07-24 17:54:16.243962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.838 [2024-07-24 17:54:16.245683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.838 [2024-07-24 17:54:16.253675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.838 [2024-07-24 17:54:16.254237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.254777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-24 17:54:16.254817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.838 [2024-07-24 17:54:16.254850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.838 [2024-07-24 17:54:16.254972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.838 [2024-07-24 17:54:16.255078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.838 [2024-07-24 17:54:16.255088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.838 [2024-07-24 17:54:16.255097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.838 [2024-07-24 17:54:16.256861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.838 [2024-07-24 17:54:16.265716] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.838 [2024-07-24 17:54:16.266284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.266821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.266862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.839 [2024-07-24 17:54:16.266897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.839 [2024-07-24 17:54:16.267289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.839 [2024-07-24 17:54:16.267487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.839 [2024-07-24 17:54:16.267499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.839 [2024-07-24 17:54:16.267508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.839 [2024-07-24 17:54:16.269162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.839 [2024-07-24 17:54:16.277604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.839 [2024-07-24 17:54:16.278206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.278761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.278801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.839 [2024-07-24 17:54:16.278836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.839 [2024-07-24 17:54:16.279004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.839 [2024-07-24 17:54:16.279139] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.839 [2024-07-24 17:54:16.279149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.839 [2024-07-24 17:54:16.279158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.839 [2024-07-24 17:54:16.280864] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.839 [2024-07-24 17:54:16.289417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.839 [2024-07-24 17:54:16.290029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.290573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.290613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.839 [2024-07-24 17:54:16.290647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.839 [2024-07-24 17:54:16.291016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.839 [2024-07-24 17:54:16.291260] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.839 [2024-07-24 17:54:16.291274] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.839 [2024-07-24 17:54:16.291288] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.839 [2024-07-24 17:54:16.293952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.839 [2024-07-24 17:54:16.301988] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.839 [2024-07-24 17:54:16.302621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.303102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.303144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.839 [2024-07-24 17:54:16.303169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.839 [2024-07-24 17:54:16.303306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.839 [2024-07-24 17:54:16.303422] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.839 [2024-07-24 17:54:16.303431] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.839 [2024-07-24 17:54:16.303444] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.839 [2024-07-24 17:54:16.305087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.839 [2024-07-24 17:54:16.313736] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.839 [2024-07-24 17:54:16.314321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.314876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.314916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.839 [2024-07-24 17:54:16.314949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.839 [2024-07-24 17:54:16.315426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.839 [2024-07-24 17:54:16.315653] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.839 [2024-07-24 17:54:16.315663] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.839 [2024-07-24 17:54:16.315671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.839 [2024-07-24 17:54:16.317390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.839 [2024-07-24 17:54:16.325437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.839 [2024-07-24 17:54:16.326070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.326592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.326633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.839 [2024-07-24 17:54:16.326667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.839 [2024-07-24 17:54:16.326917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.839 [2024-07-24 17:54:16.327016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.839 [2024-07-24 17:54:16.327025] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.839 [2024-07-24 17:54:16.327033] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.839 [2024-07-24 17:54:16.328778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.839 [2024-07-24 17:54:16.337398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.839 [2024-07-24 17:54:16.338014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.338558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.338600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.839 [2024-07-24 17:54:16.338635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.839 [2024-07-24 17:54:16.338913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.839 [2024-07-24 17:54:16.339025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.839 [2024-07-24 17:54:16.339034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.839 [2024-07-24 17:54:16.339049] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.839 [2024-07-24 17:54:16.340770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.839 [2024-07-24 17:54:16.349214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.839 [2024-07-24 17:54:16.349738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.350210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.839 [2024-07-24 17:54:16.350252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.839 [2024-07-24 17:54:16.350286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.839 [2024-07-24 17:54:16.350754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.839 [2024-07-24 17:54:16.351068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.839 [2024-07-24 17:54:16.351101] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.839 [2024-07-24 17:54:16.351131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.839 [2024-07-24 17:54:16.352916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.839 [2024-07-24 17:54:16.361072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.840 [2024-07-24 17:54:16.361679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.362115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.362130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.840 [2024-07-24 17:54:16.362141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.840 [2024-07-24 17:54:16.362269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.840 [2024-07-24 17:54:16.362420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.840 [2024-07-24 17:54:16.362430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.840 [2024-07-24 17:54:16.362440] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.840 [2024-07-24 17:54:16.364305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.840 [2024-07-24 17:54:16.373071] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.840 [2024-07-24 17:54:16.373652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.374153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.374194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.840 [2024-07-24 17:54:16.374228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.840 [2024-07-24 17:54:16.374695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.840 [2024-07-24 17:54:16.374997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.840 [2024-07-24 17:54:16.375022] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.840 [2024-07-24 17:54:16.375032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.840 [2024-07-24 17:54:16.376740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.840 [2024-07-24 17:54:16.385214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.840 [2024-07-24 17:54:16.385847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.386318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.386359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.840 [2024-07-24 17:54:16.386393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.840 [2024-07-24 17:54:16.386810] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.840 [2024-07-24 17:54:16.387271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.840 [2024-07-24 17:54:16.387312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.840 [2024-07-24 17:54:16.387320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.840 [2024-07-24 17:54:16.389023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.840 [2024-07-24 17:54:16.397034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.840 [2024-07-24 17:54:16.397634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.398126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.398167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.840 [2024-07-24 17:54:16.398201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.840 [2024-07-24 17:54:16.398378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.840 [2024-07-24 17:54:16.398477] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.840 [2024-07-24 17:54:16.398486] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.840 [2024-07-24 17:54:16.398495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.840 [2024-07-24 17:54:16.400166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.840 [2024-07-24 17:54:16.408883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.840 [2024-07-24 17:54:16.409469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.410026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.410077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.840 [2024-07-24 17:54:16.410112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.840 [2024-07-24 17:54:16.410413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.840 [2024-07-24 17:54:16.410512] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.840 [2024-07-24 17:54:16.410521] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.840 [2024-07-24 17:54:16.410530] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.840 [2024-07-24 17:54:16.412270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.840 [2024-07-24 17:54:16.420622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.840 [2024-07-24 17:54:16.421179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.421734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.421774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.840 [2024-07-24 17:54:16.421807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:54.840 [2024-07-24 17:54:16.422181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:54.840 [2024-07-24 17:54:16.422295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.840 [2024-07-24 17:54:16.422304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.840 [2024-07-24 17:54:16.422312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.840 [2024-07-24 17:54:16.423947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.840 [2024-07-24 17:54:16.432572] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.840 [2024-07-24 17:54:16.433096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.433687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.840 [2024-07-24 17:54:16.433727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:54.840 [2024-07-24 17:54:16.433760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.103 [2024-07-24 17:54:16.434143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.103 [2024-07-24 17:54:16.434341] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.103 [2024-07-24 17:54:16.434352] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.103 [2024-07-24 17:54:16.434362] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.103 [2024-07-24 17:54:16.436126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.103 [2024-07-24 17:54:16.444528] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.103 [2024-07-24 17:54:16.445116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.103 [2024-07-24 17:54:16.445650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.103 [2024-07-24 17:54:16.445690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.103 [2024-07-24 17:54:16.445724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.103 [2024-07-24 17:54:16.446208] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.103 [2024-07-24 17:54:16.446659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.103 [2024-07-24 17:54:16.446691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.103 [2024-07-24 17:54:16.446723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.103 [2024-07-24 17:54:16.448465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.103 [2024-07-24 17:54:16.456309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.103 [2024-07-24 17:54:16.456933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.103 [2024-07-24 17:54:16.457435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.103 [2024-07-24 17:54:16.457484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.103 [2024-07-24 17:54:16.457518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.103 [2024-07-24 17:54:16.457888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.104 [2024-07-24 17:54:16.458252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.104 [2024-07-24 17:54:16.458284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.104 [2024-07-24 17:54:16.458316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.104 [2024-07-24 17:54:16.460026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.104 [2024-07-24 17:54:16.468100] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.104 [2024-07-24 17:54:16.468646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.469120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.469163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.104 [2024-07-24 17:54:16.469199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.104 [2024-07-24 17:54:16.469579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.104 [2024-07-24 17:54:16.469692] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.104 [2024-07-24 17:54:16.469701] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.104 [2024-07-24 17:54:16.469710] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.104 [2024-07-24 17:54:16.471519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.104 [2024-07-24 17:54:16.479919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.104 [2024-07-24 17:54:16.480488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.480939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.480979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.104 [2024-07-24 17:54:16.481012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.104 [2024-07-24 17:54:16.481516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.104 [2024-07-24 17:54:16.481602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.104 [2024-07-24 17:54:16.481612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.104 [2024-07-24 17:54:16.481621] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.104 [2024-07-24 17:54:16.483184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.104 [2024-07-24 17:54:16.491852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.104 [2024-07-24 17:54:16.492425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.492884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.492926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.104 [2024-07-24 17:54:16.492968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.104 [2024-07-24 17:54:16.493348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.104 [2024-07-24 17:54:16.493747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.104 [2024-07-24 17:54:16.493761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.104 [2024-07-24 17:54:16.493775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.104 [2024-07-24 17:54:16.496128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.104 [2024-07-24 17:54:16.504288] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.104 [2024-07-24 17:54:16.504815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.505424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.505466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.104 [2024-07-24 17:54:16.505500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.104 [2024-07-24 17:54:16.505869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.104 [2024-07-24 17:54:16.506286] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.104 [2024-07-24 17:54:16.506319] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.104 [2024-07-24 17:54:16.506349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.104 [2024-07-24 17:54:16.508161] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.104 [2024-07-24 17:54:16.516135] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.104 [2024-07-24 17:54:16.516710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.517253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.517297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.104 [2024-07-24 17:54:16.517334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.104 [2024-07-24 17:54:16.517705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.104 [2024-07-24 17:54:16.517877] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.104 [2024-07-24 17:54:16.517888] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.104 [2024-07-24 17:54:16.517897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.104 [2024-07-24 17:54:16.519673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.104 [2024-07-24 17:54:16.528134] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.104 [2024-07-24 17:54:16.528639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.529060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.529102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.104 [2024-07-24 17:54:16.529136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.104 [2024-07-24 17:54:16.529569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.104 [2024-07-24 17:54:16.529708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.104 [2024-07-24 17:54:16.529718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.104 [2024-07-24 17:54:16.529729] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.104 [2024-07-24 17:54:16.531476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.104 [2024-07-24 17:54:16.540167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.104 [2024-07-24 17:54:16.540775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.541169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.541211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.104 [2024-07-24 17:54:16.541245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.104 [2024-07-24 17:54:16.541448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.104 [2024-07-24 17:54:16.541509] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.104 [2024-07-24 17:54:16.541519] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.104 [2024-07-24 17:54:16.541528] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.104 [2024-07-24 17:54:16.543269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.104 [2024-07-24 17:54:16.551995] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.104 [2024-07-24 17:54:16.552485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.552809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.552850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.104 [2024-07-24 17:54:16.552883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.104 [2024-07-24 17:54:16.553316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.104 [2024-07-24 17:54:16.553740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.104 [2024-07-24 17:54:16.553781] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.104 [2024-07-24 17:54:16.553790] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.104 [2024-07-24 17:54:16.555593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.104 [2024-07-24 17:54:16.563937] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.104 [2024-07-24 17:54:16.564433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.564942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.104 [2024-07-24 17:54:16.564982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.104 [2024-07-24 17:54:16.565015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.104 [2024-07-24 17:54:16.565498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.104 [2024-07-24 17:54:16.565783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.104 [2024-07-24 17:54:16.565793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.104 [2024-07-24 17:54:16.565801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.105 [2024-07-24 17:54:16.567486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.105 [2024-07-24 17:54:16.575832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.105 [2024-07-24 17:54:16.576435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.576831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.576871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.105 [2024-07-24 17:54:16.576905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.105 [2024-07-24 17:54:16.577285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.105 [2024-07-24 17:54:16.577736] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.105 [2024-07-24 17:54:16.577768] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.105 [2024-07-24 17:54:16.577798] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.105 [2024-07-24 17:54:16.579470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.105 [2024-07-24 17:54:16.587798] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.105 [2024-07-24 17:54:16.588448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.588905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.588945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.105 [2024-07-24 17:54:16.588978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.105 [2024-07-24 17:54:16.589404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.105 [2024-07-24 17:54:16.589756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.105 [2024-07-24 17:54:16.589788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.105 [2024-07-24 17:54:16.589818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.105 [2024-07-24 17:54:16.591642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.105 [2024-07-24 17:54:16.599639] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.105 [2024-07-24 17:54:16.600263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.600720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.600771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.105 [2024-07-24 17:54:16.600780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.105 [2024-07-24 17:54:16.600898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.105 [2024-07-24 17:54:16.600997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.105 [2024-07-24 17:54:16.601009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.105 [2024-07-24 17:54:16.601018] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.105 [2024-07-24 17:54:16.602704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.105 [2024-07-24 17:54:16.611451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.105 [2024-07-24 17:54:16.612000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.612392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.612405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.105 [2024-07-24 17:54:16.612416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.105 [2024-07-24 17:54:16.612572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.105 [2024-07-24 17:54:16.612692] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.105 [2024-07-24 17:54:16.612701] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.105 [2024-07-24 17:54:16.612710] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.105 [2024-07-24 17:54:16.614368] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.105 [2024-07-24 17:54:16.623536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.105 [2024-07-24 17:54:16.624111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.624569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.624611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.105 [2024-07-24 17:54:16.624644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.105 [2024-07-24 17:54:16.624892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.105 [2024-07-24 17:54:16.624996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.105 [2024-07-24 17:54:16.625006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.105 [2024-07-24 17:54:16.625015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.105 [2024-07-24 17:54:16.626651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.105 [2024-07-24 17:54:16.635451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.105 [2024-07-24 17:54:16.636026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.636499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.636540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.105 [2024-07-24 17:54:16.636574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.105 [2024-07-24 17:54:16.636977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.105 [2024-07-24 17:54:16.637090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.105 [2024-07-24 17:54:16.637101] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.105 [2024-07-24 17:54:16.637115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.105 [2024-07-24 17:54:16.638687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.105 [2024-07-24 17:54:16.647316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.105 [2024-07-24 17:54:16.647862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.648357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.648390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.105 [2024-07-24 17:54:16.648424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.105 [2024-07-24 17:54:16.648935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.105 [2024-07-24 17:54:16.649061] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.105 [2024-07-24 17:54:16.649071] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.105 [2024-07-24 17:54:16.649080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.105 [2024-07-24 17:54:16.650712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.105 [2024-07-24 17:54:16.659209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.105 [2024-07-24 17:54:16.659657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.660194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.660235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.105 [2024-07-24 17:54:16.660268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.105 [2024-07-24 17:54:16.660733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.105 [2024-07-24 17:54:16.660910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.105 [2024-07-24 17:54:16.660920] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.105 [2024-07-24 17:54:16.660928] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.105 [2024-07-24 17:54:16.662624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.105 [2024-07-24 17:54:16.670929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.105 [2024-07-24 17:54:16.671425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.671883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.105 [2024-07-24 17:54:16.671924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.105 [2024-07-24 17:54:16.671957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.105 [2024-07-24 17:54:16.672438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.105 [2024-07-24 17:54:16.672939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.105 [2024-07-24 17:54:16.672971] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.105 [2024-07-24 17:54:16.673001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.105 [2024-07-24 17:54:16.674733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.105 [2024-07-24 17:54:16.682794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.106 [2024-07-24 17:54:16.683493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.106 [2024-07-24 17:54:16.684030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.106 [2024-07-24 17:54:16.684097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.106 [2024-07-24 17:54:16.684131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.106 [2024-07-24 17:54:16.684548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.106 [2024-07-24 17:54:16.684967] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.106 [2024-07-24 17:54:16.684977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.106 [2024-07-24 17:54:16.684985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.106 [2024-07-24 17:54:16.686603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.106 [2024-07-24 17:54:16.694669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.106 [2024-07-24 17:54:16.695313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.106 [2024-07-24 17:54:16.695747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.106 [2024-07-24 17:54:16.695787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.106 [2024-07-24 17:54:16.695821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.106 [2024-07-24 17:54:16.696350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.106 [2024-07-24 17:54:16.696584] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.106 [2024-07-24 17:54:16.696598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.106 [2024-07-24 17:54:16.696611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.106 [2024-07-24 17:54:16.699179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.368 [2024-07-24 17:54:16.707156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.368 [2024-07-24 17:54:16.707757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.708233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.708275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.368 [2024-07-24 17:54:16.708309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.368 [2024-07-24 17:54:16.708774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.368 [2024-07-24 17:54:16.708914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.368 [2024-07-24 17:54:16.708923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.368 [2024-07-24 17:54:16.708933] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.368 [2024-07-24 17:54:16.710595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.368 [2024-07-24 17:54:16.719079] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.368 [2024-07-24 17:54:16.719572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.720008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.720061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.368 [2024-07-24 17:54:16.720099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.368 [2024-07-24 17:54:16.720369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.368 [2024-07-24 17:54:16.720720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.368 [2024-07-24 17:54:16.720752] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.368 [2024-07-24 17:54:16.720783] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.368 [2024-07-24 17:54:16.722727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.368 [2024-07-24 17:54:16.730832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.368 [2024-07-24 17:54:16.731396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.731884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.731925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.368 [2024-07-24 17:54:16.731959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.368 [2024-07-24 17:54:16.732346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.368 [2024-07-24 17:54:16.732432] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.368 [2024-07-24 17:54:16.732441] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.368 [2024-07-24 17:54:16.732450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.368 [2024-07-24 17:54:16.734186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.368 [2024-07-24 17:54:16.742771] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.368 [2024-07-24 17:54:16.743330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.743838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.743878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.368 [2024-07-24 17:54:16.743912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.368 [2024-07-24 17:54:16.744306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.368 [2024-07-24 17:54:16.744407] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.368 [2024-07-24 17:54:16.744417] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.368 [2024-07-24 17:54:16.744426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.368 [2024-07-24 17:54:16.746008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.368 [2024-07-24 17:54:16.754657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.368 [2024-07-24 17:54:16.755266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.755673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.755714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.368 [2024-07-24 17:54:16.755748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.368 [2024-07-24 17:54:16.756224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.368 [2024-07-24 17:54:16.756475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.368 [2024-07-24 17:54:16.756484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.368 [2024-07-24 17:54:16.756493] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.368 [2024-07-24 17:54:16.758192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.368 [2024-07-24 17:54:16.766573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.368 [2024-07-24 17:54:16.767143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.767557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.767598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.368 [2024-07-24 17:54:16.767632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.368 [2024-07-24 17:54:16.768162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.368 [2024-07-24 17:54:16.768457] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.368 [2024-07-24 17:54:16.768467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.368 [2024-07-24 17:54:16.768476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.368 [2024-07-24 17:54:16.770080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.368 [2024-07-24 17:54:16.778339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.368 [2024-07-24 17:54:16.778818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.779283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.779325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.368 [2024-07-24 17:54:16.779359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.368 [2024-07-24 17:54:16.779776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.368 [2024-07-24 17:54:16.780240] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.368 [2024-07-24 17:54:16.780273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.368 [2024-07-24 17:54:16.780306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.368 [2024-07-24 17:54:16.781986] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.368 [2024-07-24 17:54:16.790282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.368 [2024-07-24 17:54:16.790840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.791259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.368 [2024-07-24 17:54:16.791320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.368 [2024-07-24 17:54:16.791329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.369 [2024-07-24 17:54:16.791463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.369 [2024-07-24 17:54:16.791562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.369 [2024-07-24 17:54:16.791571] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.369 [2024-07-24 17:54:16.791580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.369 [2024-07-24 17:54:16.793427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.369 [2024-07-24 17:54:16.801928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.369 [2024-07-24 17:54:16.802460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.802877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.802918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.369 [2024-07-24 17:54:16.802951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.369 [2024-07-24 17:54:16.803336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.369 [2024-07-24 17:54:16.803543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.369 [2024-07-24 17:54:16.803553] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.369 [2024-07-24 17:54:16.803562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.369 [2024-07-24 17:54:16.805192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.369 [2024-07-24 17:54:16.813703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.369 [2024-07-24 17:54:16.814282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.814748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.814789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.369 [2024-07-24 17:54:16.814822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.369 [2024-07-24 17:54:16.815447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.369 [2024-07-24 17:54:16.815581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.369 [2024-07-24 17:54:16.815590] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.369 [2024-07-24 17:54:16.815599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.369 [2024-07-24 17:54:16.817244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.369 [2024-07-24 17:54:16.825618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.369 [2024-07-24 17:54:16.826257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.826718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.826759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.369 [2024-07-24 17:54:16.826802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.369 [2024-07-24 17:54:16.827230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.369 [2024-07-24 17:54:16.827421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.369 [2024-07-24 17:54:16.827430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.369 [2024-07-24 17:54:16.827439] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.369 [2024-07-24 17:54:16.829151] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.369 [2024-07-24 17:54:16.837412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.369 [2024-07-24 17:54:16.838003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.838494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.838534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.369 [2024-07-24 17:54:16.838567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.369 [2024-07-24 17:54:16.839035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.369 [2024-07-24 17:54:16.839450] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.369 [2024-07-24 17:54:16.839482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.369 [2024-07-24 17:54:16.839513] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.369 [2024-07-24 17:54:16.841280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.369 [2024-07-24 17:54:16.849378] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.369 [2024-07-24 17:54:16.850002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.850448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.850462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.369 [2024-07-24 17:54:16.850472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.369 [2024-07-24 17:54:16.850577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.369 [2024-07-24 17:54:16.850689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.369 [2024-07-24 17:54:16.850698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.369 [2024-07-24 17:54:16.850707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.369 [2024-07-24 17:54:16.852460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.369 [2024-07-24 17:54:16.861464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.369 [2024-07-24 17:54:16.862073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.862524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.862538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.369 [2024-07-24 17:54:16.862549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.369 [2024-07-24 17:54:16.862666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.369 [2024-07-24 17:54:16.862804] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.369 [2024-07-24 17:54:16.862814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.369 [2024-07-24 17:54:16.862824] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.369 [2024-07-24 17:54:16.864585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.369 [2024-07-24 17:54:16.873501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.369 [2024-07-24 17:54:16.874115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.874526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.874567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.369 [2024-07-24 17:54:16.874599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.369 [2024-07-24 17:54:16.874967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.369 [2024-07-24 17:54:16.875432] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.369 [2024-07-24 17:54:16.875465] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.369 [2024-07-24 17:54:16.875497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.369 [2024-07-24 17:54:16.877246] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.369 [2024-07-24 17:54:16.885358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.369 [2024-07-24 17:54:16.885920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.886427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.886469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.369 [2024-07-24 17:54:16.886510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.369 [2024-07-24 17:54:16.886620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.369 [2024-07-24 17:54:16.886740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.369 [2024-07-24 17:54:16.886749] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.369 [2024-07-24 17:54:16.886759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.369 [2024-07-24 17:54:16.888411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.369 [2024-07-24 17:54:16.897303] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.369 [2024-07-24 17:54:16.897866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.898331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.369 [2024-07-24 17:54:16.898353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.369 [2024-07-24 17:54:16.898368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.369 [2024-07-24 17:54:16.898575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.369 [2024-07-24 17:54:16.898733] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.370 [2024-07-24 17:54:16.898747] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.370 [2024-07-24 17:54:16.898760] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.370 [2024-07-24 17:54:16.901405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.370 [2024-07-24 17:54:16.909535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.370 [2024-07-24 17:54:16.910152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.370 [2024-07-24 17:54:16.910551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.370 [2024-07-24 17:54:16.910599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.370 [2024-07-24 17:54:16.910609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.370 [2024-07-24 17:54:16.910777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.370 [2024-07-24 17:54:16.910896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.370 [2024-07-24 17:54:16.910906] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.370 [2024-07-24 17:54:16.910915] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.370 [2024-07-24 17:54:16.912782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.370 [2024-07-24 17:54:16.921380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.370 [2024-07-24 17:54:16.921979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.370 [2024-07-24 17:54:16.922454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.370 [2024-07-24 17:54:16.922496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.370 [2024-07-24 17:54:16.922529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.370 [2024-07-24 17:54:16.922882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.370 [2024-07-24 17:54:16.922995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.370 [2024-07-24 17:54:16.923004] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.370 [2024-07-24 17:54:16.923013] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.370 [2024-07-24 17:54:16.924747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.370 [2024-07-24 17:54:16.933302] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.370 [2024-07-24 17:54:16.933899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.370 [2024-07-24 17:54:16.934447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.370 [2024-07-24 17:54:16.934488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.370 [2024-07-24 17:54:16.934523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.370 [2024-07-24 17:54:16.934988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.370 [2024-07-24 17:54:16.935501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.370 [2024-07-24 17:54:16.935554] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.370 [2024-07-24 17:54:16.935562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.370 [2024-07-24 17:54:16.937234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.370 [2024-07-24 17:54:16.945227] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.370 [2024-07-24 17:54:16.945872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.370 [2024-07-24 17:54:16.946390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.370 [2024-07-24 17:54:16.946435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.370 [2024-07-24 17:54:16.946445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.370 [2024-07-24 17:54:16.946564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.370 [2024-07-24 17:54:16.946663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.370 [2024-07-24 17:54:16.946672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.370 [2024-07-24 17:54:16.946680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.370 [2024-07-24 17:54:16.948387] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.370 [2024-07-24 17:54:16.957036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.370 [2024-07-24 17:54:16.957593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.370 [2024-07-24 17:54:16.958060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.370 [2024-07-24 17:54:16.958102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.370 [2024-07-24 17:54:16.958136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.370 [2024-07-24 17:54:16.958405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.370 [2024-07-24 17:54:16.958536] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.370 [2024-07-24 17:54:16.958550] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.370 [2024-07-24 17:54:16.958563] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.370 [2024-07-24 17:54:16.961190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.632 [2024-07-24 17:54:16.969856] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.632 [2024-07-24 17:54:16.970437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.632 [2024-07-24 17:54:16.970937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.632 [2024-07-24 17:54:16.970977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.632 [2024-07-24 17:54:16.971011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.632 [2024-07-24 17:54:16.971296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.632 [2024-07-24 17:54:16.971520] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.632 [2024-07-24 17:54:16.971529] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.632 [2024-07-24 17:54:16.971543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.632 [2024-07-24 17:54:16.973344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.632 [2024-07-24 17:54:16.981759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.632 [2024-07-24 17:54:16.982384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.632 [2024-07-24 17:54:16.982938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.632 [2024-07-24 17:54:16.982978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.632 [2024-07-24 17:54:16.983011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.632 [2024-07-24 17:54:16.983494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.632 [2024-07-24 17:54:16.983939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.632 [2024-07-24 17:54:16.983948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.632 [2024-07-24 17:54:16.983957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.632 [2024-07-24 17:54:16.985549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.632 [2024-07-24 17:54:16.993570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.632 [2024-07-24 17:54:16.994115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.632 [2024-07-24 17:54:16.994620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.632 [2024-07-24 17:54:16.994663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.632 [2024-07-24 17:54:16.994698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.632 [2024-07-24 17:54:16.995083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.632 [2024-07-24 17:54:16.995584] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.632 [2024-07-24 17:54:16.995617] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.632 [2024-07-24 17:54:16.995647] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.632 [2024-07-24 17:54:16.997357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.632 [2024-07-24 17:54:17.005443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.632 [2024-07-24 17:54:17.005966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.632 [2024-07-24 17:54:17.006509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.632 [2024-07-24 17:54:17.006551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.632 [2024-07-24 17:54:17.006584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.632 [2024-07-24 17:54:17.006869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.632 [2024-07-24 17:54:17.006954] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.632 [2024-07-24 17:54:17.006963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.632 [2024-07-24 17:54:17.006972] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.632 [2024-07-24 17:54:17.008750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.632 [2024-07-24 17:54:17.017422] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.632 [2024-07-24 17:54:17.017898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.632 [2024-07-24 17:54:17.018382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.632 [2024-07-24 17:54:17.018423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.632 [2024-07-24 17:54:17.018457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.632 [2024-07-24 17:54:17.018696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.632 [2024-07-24 17:54:17.018796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.632 [2024-07-24 17:54:17.018805] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.632 [2024-07-24 17:54:17.018814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.632 [2024-07-24 17:54:17.020345] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.633 [2024-07-24 17:54:17.029204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.633 [2024-07-24 17:54:17.029745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.030259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.030271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.633 [2024-07-24 17:54:17.030281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.633 [2024-07-24 17:54:17.030372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.633 [2024-07-24 17:54:17.030498] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.633 [2024-07-24 17:54:17.030507] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.633 [2024-07-24 17:54:17.030516] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.633 [2024-07-24 17:54:17.032153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.633 [2024-07-24 17:54:17.041078] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.633 [2024-07-24 17:54:17.041655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.042168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.042210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.633 [2024-07-24 17:54:17.042244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.633 [2024-07-24 17:54:17.042709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.633 [2024-07-24 17:54:17.042912] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.633 [2024-07-24 17:54:17.042921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.633 [2024-07-24 17:54:17.042929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.633 [2024-07-24 17:54:17.044526] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.633 [2024-07-24 17:54:17.052853] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.633 [2024-07-24 17:54:17.053474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.053990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.054029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.633 [2024-07-24 17:54:17.054079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.633 [2024-07-24 17:54:17.054497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.633 [2024-07-24 17:54:17.054709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.633 [2024-07-24 17:54:17.054718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.633 [2024-07-24 17:54:17.054727] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.633 [2024-07-24 17:54:17.056341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.633 [2024-07-24 17:54:17.064753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.633 [2024-07-24 17:54:17.065415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.065760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.065772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.633 [2024-07-24 17:54:17.065782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.633 [2024-07-24 17:54:17.065901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.633 [2024-07-24 17:54:17.066013] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.633 [2024-07-24 17:54:17.066023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.633 [2024-07-24 17:54:17.066031] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.633 [2024-07-24 17:54:17.067688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.633 [2024-07-24 17:54:17.076753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.633 [2024-07-24 17:54:17.077414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.077959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.078004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.633 [2024-07-24 17:54:17.078014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.633 [2024-07-24 17:54:17.078139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.633 [2024-07-24 17:54:17.078295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.633 [2024-07-24 17:54:17.078304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.633 [2024-07-24 17:54:17.078313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.633 [2024-07-24 17:54:17.080089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.633 [2024-07-24 17:54:17.088493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.633 [2024-07-24 17:54:17.088973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.089373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.089415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.633 [2024-07-24 17:54:17.089448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.633 [2024-07-24 17:54:17.089826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.633 [2024-07-24 17:54:17.089984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.633 [2024-07-24 17:54:17.089998] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.633 [2024-07-24 17:54:17.090011] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.633 [2024-07-24 17:54:17.092661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.633 [2024-07-24 17:54:17.100905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.633 [2024-07-24 17:54:17.101570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.102105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.102148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.633 [2024-07-24 17:54:17.102181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.633 [2024-07-24 17:54:17.102600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.633 [2024-07-24 17:54:17.102902] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.633 [2024-07-24 17:54:17.102933] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.633 [2024-07-24 17:54:17.102964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.633 [2024-07-24 17:54:17.104986] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.633 [2024-07-24 17:54:17.112764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.633 [2024-07-24 17:54:17.113407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.113872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.113884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.633 [2024-07-24 17:54:17.113895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.633 [2024-07-24 17:54:17.114006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.633 [2024-07-24 17:54:17.114178] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.633 [2024-07-24 17:54:17.114189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.633 [2024-07-24 17:54:17.114199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.633 [2024-07-24 17:54:17.116206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.633 [2024-07-24 17:54:17.124773] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.633 [2024-07-24 17:54:17.125438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.125921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.633 [2024-07-24 17:54:17.125970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.633 [2024-07-24 17:54:17.126003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.633 [2024-07-24 17:54:17.126384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.633 [2024-07-24 17:54:17.126884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.633 [2024-07-24 17:54:17.126916] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.633 [2024-07-24 17:54:17.126946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.633 [2024-07-24 17:54:17.128680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.633 [2024-07-24 17:54:17.136880] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.633 [2024-07-24 17:54:17.137527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.137797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.137837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.634 [2024-07-24 17:54:17.137870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.634 [2024-07-24 17:54:17.138262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.634 [2024-07-24 17:54:17.138368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.634 [2024-07-24 17:54:17.138377] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.634 [2024-07-24 17:54:17.138386] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.634 [2024-07-24 17:54:17.140194] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.634 [2024-07-24 17:54:17.148845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.634 [2024-07-24 17:54:17.149471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.149976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.150017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.634 [2024-07-24 17:54:17.150065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.634 [2024-07-24 17:54:17.150534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.634 [2024-07-24 17:54:17.150802] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.634 [2024-07-24 17:54:17.150816] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.634 [2024-07-24 17:54:17.150829] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.634 [2024-07-24 17:54:17.153432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.634 [2024-07-24 17:54:17.161125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.634 [2024-07-24 17:54:17.161745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.162160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.162174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.634 [2024-07-24 17:54:17.162188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.634 [2024-07-24 17:54:17.162325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.634 [2024-07-24 17:54:17.162427] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.634 [2024-07-24 17:54:17.162436] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.634 [2024-07-24 17:54:17.162445] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.634 [2024-07-24 17:54:17.164104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.634 [2024-07-24 17:54:17.172927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.634 [2024-07-24 17:54:17.173515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.174059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.174099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.634 [2024-07-24 17:54:17.174132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.634 [2024-07-24 17:54:17.174649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.634 [2024-07-24 17:54:17.175063] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.634 [2024-07-24 17:54:17.175095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.634 [2024-07-24 17:54:17.175126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.634 [2024-07-24 17:54:17.176852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.634 [2024-07-24 17:54:17.184813] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.634 [2024-07-24 17:54:17.185430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.185937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.185976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.634 [2024-07-24 17:54:17.186008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.634 [2024-07-24 17:54:17.186134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.634 [2024-07-24 17:54:17.186262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.634 [2024-07-24 17:54:17.186271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.634 [2024-07-24 17:54:17.186279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.634 [2024-07-24 17:54:17.188069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.634 [2024-07-24 17:54:17.196638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.634 [2024-07-24 17:54:17.197247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.197778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.197818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.634 [2024-07-24 17:54:17.197851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.634 [2024-07-24 17:54:17.198234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.634 [2024-07-24 17:54:17.198334] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.634 [2024-07-24 17:54:17.198343] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.634 [2024-07-24 17:54:17.198352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.634 [2024-07-24 17:54:17.200188] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.634 [2024-07-24 17:54:17.208432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.634 [2024-07-24 17:54:17.208968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.209510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.209551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.634 [2024-07-24 17:54:17.209586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.634 [2024-07-24 17:54:17.209856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.634 [2024-07-24 17:54:17.210055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.634 [2024-07-24 17:54:17.210064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.634 [2024-07-24 17:54:17.210073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.634 [2024-07-24 17:54:17.211847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.634 [2024-07-24 17:54:17.220319] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.634 [2024-07-24 17:54:17.220930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.221388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.634 [2024-07-24 17:54:17.221430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.634 [2024-07-24 17:54:17.221464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.634 [2024-07-24 17:54:17.221621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.634 [2024-07-24 17:54:17.221733] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.634 [2024-07-24 17:54:17.221742] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.634 [2024-07-24 17:54:17.221751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.634 [2024-07-24 17:54:17.223563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.896 [2024-07-24 17:54:17.232180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.896 [2024-07-24 17:54:17.232758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.896 [2024-07-24 17:54:17.233154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.896 [2024-07-24 17:54:17.233196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.896 [2024-07-24 17:54:17.233230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.896 [2024-07-24 17:54:17.233461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.896 [2024-07-24 17:54:17.233554] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.896 [2024-07-24 17:54:17.233564] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.896 [2024-07-24 17:54:17.233574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.896 [2024-07-24 17:54:17.235225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.896 [2024-07-24 17:54:17.244160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.896 [2024-07-24 17:54:17.244686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.896 [2024-07-24 17:54:17.245210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.896 [2024-07-24 17:54:17.245252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.896 [2024-07-24 17:54:17.245284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.896 [2024-07-24 17:54:17.245482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.896 [2024-07-24 17:54:17.245571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.896 [2024-07-24 17:54:17.245581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.896 [2024-07-24 17:54:17.245590] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.896 [2024-07-24 17:54:17.247381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.896 [2024-07-24 17:54:17.256001] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.896 [2024-07-24 17:54:17.256625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.896 [2024-07-24 17:54:17.257029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.896 [2024-07-24 17:54:17.257075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.896 [2024-07-24 17:54:17.257085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.896 [2024-07-24 17:54:17.257191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.896 [2024-07-24 17:54:17.257304] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.896 [2024-07-24 17:54:17.257313] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.896 [2024-07-24 17:54:17.257322] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.896 [2024-07-24 17:54:17.259068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.896 [2024-07-24 17:54:17.267798] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.896 [2024-07-24 17:54:17.268357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.896 [2024-07-24 17:54:17.268883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.896 [2024-07-24 17:54:17.268924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.896 [2024-07-24 17:54:17.268958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.896 [2024-07-24 17:54:17.269441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.896 [2024-07-24 17:54:17.269513] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.897 [2024-07-24 17:54:17.269525] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.897 [2024-07-24 17:54:17.269534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.897 [2024-07-24 17:54:17.271241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.897 [2024-07-24 17:54:17.279804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.897 [2024-07-24 17:54:17.280443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.280950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.280990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.897 [2024-07-24 17:54:17.281024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.897 [2024-07-24 17:54:17.281389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.897 [2024-07-24 17:54:17.281502] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.897 [2024-07-24 17:54:17.281512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.897 [2024-07-24 17:54:17.281520] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.897 [2024-07-24 17:54:17.283211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.897 [2024-07-24 17:54:17.291570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.897 [2024-07-24 17:54:17.292208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.292730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.292770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.897 [2024-07-24 17:54:17.292803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.897 [2024-07-24 17:54:17.293222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.897 [2024-07-24 17:54:17.293335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.897 [2024-07-24 17:54:17.293344] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.897 [2024-07-24 17:54:17.293354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.897 [2024-07-24 17:54:17.295002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.897 [2024-07-24 17:54:17.303464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.897 [2024-07-24 17:54:17.304091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.304615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.304654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.897 [2024-07-24 17:54:17.304687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.897 [2024-07-24 17:54:17.304904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.897 [2024-07-24 17:54:17.305378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.897 [2024-07-24 17:54:17.305411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.897 [2024-07-24 17:54:17.305453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.897 [2024-07-24 17:54:17.307302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.897 [2024-07-24 17:54:17.315257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.897 [2024-07-24 17:54:17.315820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.316357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.316405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.897 [2024-07-24 17:54:17.316414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.897 [2024-07-24 17:54:17.316534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.897 [2024-07-24 17:54:17.316647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.897 [2024-07-24 17:54:17.316656] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.897 [2024-07-24 17:54:17.316664] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.897 [2024-07-24 17:54:17.318471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.897 [2024-07-24 17:54:17.327095] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.897 [2024-07-24 17:54:17.327735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.328260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.328303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.897 [2024-07-24 17:54:17.328336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.897 [2024-07-24 17:54:17.328679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.897 [2024-07-24 17:54:17.328764] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.897 [2024-07-24 17:54:17.328773] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.897 [2024-07-24 17:54:17.328781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.897 [2024-07-24 17:54:17.330514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.897 [2024-07-24 17:54:17.338975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.897 [2024-07-24 17:54:17.339589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.340049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.340062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.897 [2024-07-24 17:54:17.340072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.897 [2024-07-24 17:54:17.340204] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.897 [2024-07-24 17:54:17.340316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.897 [2024-07-24 17:54:17.340326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.897 [2024-07-24 17:54:17.340334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.897 [2024-07-24 17:54:17.341990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.897 [2024-07-24 17:54:17.350863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.897 [2024-07-24 17:54:17.351440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.897 [2024-07-24 17:54:17.351905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.351945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.898 [2024-07-24 17:54:17.351979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.898 [2024-07-24 17:54:17.352408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.898 [2024-07-24 17:54:17.352708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.898 [2024-07-24 17:54:17.352717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.898 [2024-07-24 17:54:17.352726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.898 [2024-07-24 17:54:17.355095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.898 [2024-07-24 17:54:17.363494] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.898 [2024-07-24 17:54:17.364128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.364582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.364595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.898 [2024-07-24 17:54:17.364606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.898 [2024-07-24 17:54:17.364719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.898 [2024-07-24 17:54:17.364854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.898 [2024-07-24 17:54:17.364863] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.898 [2024-07-24 17:54:17.364873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.898 [2024-07-24 17:54:17.366903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.898 [2024-07-24 17:54:17.375745] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.898 [2024-07-24 17:54:17.376373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.376807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.376848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.898 [2024-07-24 17:54:17.376882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.898 [2024-07-24 17:54:17.377264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.898 [2024-07-24 17:54:17.377571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.898 [2024-07-24 17:54:17.377580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.898 [2024-07-24 17:54:17.377590] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.898 [2024-07-24 17:54:17.379403] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.898 [2024-07-24 17:54:17.387476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.898 [2024-07-24 17:54:17.388126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.388655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.388695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.898 [2024-07-24 17:54:17.388729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.898 [2024-07-24 17:54:17.389157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.898 [2024-07-24 17:54:17.389557] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.898 [2024-07-24 17:54:17.389589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.898 [2024-07-24 17:54:17.389620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.898 [2024-07-24 17:54:17.391222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.898 [2024-07-24 17:54:17.399370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.898 [2024-07-24 17:54:17.399725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.400258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.400300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.898 [2024-07-24 17:54:17.400333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.898 [2024-07-24 17:54:17.400751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.898 [2024-07-24 17:54:17.400979] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.898 [2024-07-24 17:54:17.400988] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.898 [2024-07-24 17:54:17.400997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.898 [2024-07-24 17:54:17.402784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.898 [2024-07-24 17:54:17.411174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.898 [2024-07-24 17:54:17.411723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.412129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.412171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.898 [2024-07-24 17:54:17.412205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.898 [2024-07-24 17:54:17.412549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.898 [2024-07-24 17:54:17.412676] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.898 [2024-07-24 17:54:17.412685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.898 [2024-07-24 17:54:17.412694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.898 [2024-07-24 17:54:17.414322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.898 [2024-07-24 17:54:17.423035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.898 [2024-07-24 17:54:17.423632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.424126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.898 [2024-07-24 17:54:17.424138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.898 [2024-07-24 17:54:17.424148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.898 [2024-07-24 17:54:17.424254] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.898 [2024-07-24 17:54:17.424367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.898 [2024-07-24 17:54:17.424376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.899 [2024-07-24 17:54:17.424385] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.899 [2024-07-24 17:54:17.426031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.899 [2024-07-24 17:54:17.434880] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.899 [2024-07-24 17:54:17.435475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.899 [2024-07-24 17:54:17.435934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.899 [2024-07-24 17:54:17.435974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.899 [2024-07-24 17:54:17.436008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.899 [2024-07-24 17:54:17.436388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.899 [2024-07-24 17:54:17.436647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.899 [2024-07-24 17:54:17.436657] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.899 [2024-07-24 17:54:17.436665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.899 [2024-07-24 17:54:17.438389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.899 [2024-07-24 17:54:17.446691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.899 [2024-07-24 17:54:17.447059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.899 [2024-07-24 17:54:17.447553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.899 [2024-07-24 17:54:17.447593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.899 [2024-07-24 17:54:17.447627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.899 [2024-07-24 17:54:17.447797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.899 [2024-07-24 17:54:17.447896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.899 [2024-07-24 17:54:17.447905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.899 [2024-07-24 17:54:17.447914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.899 [2024-07-24 17:54:17.449705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.899 [2024-07-24 17:54:17.458497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.899 [2024-07-24 17:54:17.459148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.899 [2024-07-24 17:54:17.459685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.899 [2024-07-24 17:54:17.459726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.899 [2024-07-24 17:54:17.459759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.899 [2024-07-24 17:54:17.460104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.899 [2024-07-24 17:54:17.460204] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.899 [2024-07-24 17:54:17.460213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.899 [2024-07-24 17:54:17.460222] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.899 [2024-07-24 17:54:17.461787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.899 [2024-07-24 17:54:17.470275] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.899 [2024-07-24 17:54:17.470897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.899 [2024-07-24 17:54:17.471414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.899 [2024-07-24 17:54:17.471458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.899 [2024-07-24 17:54:17.471491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.899 [2024-07-24 17:54:17.471902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.899 [2024-07-24 17:54:17.472029] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.899 [2024-07-24 17:54:17.472038] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.899 [2024-07-24 17:54:17.472052] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.899 [2024-07-24 17:54:17.473782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:55.899 [2024-07-24 17:54:17.482056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.899 [2024-07-24 17:54:17.482685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.899 [2024-07-24 17:54:17.483213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.899 [2024-07-24 17:54:17.483255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:55.899 [2024-07-24 17:54:17.483289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:55.899 [2024-07-24 17:54:17.483609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:55.899 [2024-07-24 17:54:17.483910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.899 [2024-07-24 17:54:17.483919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.899 [2024-07-24 17:54:17.483928] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.899 [2024-07-24 17:54:17.485912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.164 [2024-07-24 17:54:17.494736] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.164 [2024-07-24 17:54:17.495353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.164 [2024-07-24 17:54:17.495886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.164 [2024-07-24 17:54:17.495926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.164 [2024-07-24 17:54:17.495968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.164 [2024-07-24 17:54:17.496208] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.164 [2024-07-24 17:54:17.496325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.164 [2024-07-24 17:54:17.496334] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.164 [2024-07-24 17:54:17.496342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.164 [2024-07-24 17:54:17.498054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.164 [2024-07-24 17:54:17.506820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.164 [2024-07-24 17:54:17.507475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.164 [2024-07-24 17:54:17.507665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.164 [2024-07-24 17:54:17.507678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.164 [2024-07-24 17:54:17.507689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.164 [2024-07-24 17:54:17.507849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.164 [2024-07-24 17:54:17.507942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.164 [2024-07-24 17:54:17.507952] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.164 [2024-07-24 17:54:17.507962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.164 [2024-07-24 17:54:17.509796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.164 [2024-07-24 17:54:17.519001] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.164 [2024-07-24 17:54:17.519647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.164 [2024-07-24 17:54:17.520097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.164 [2024-07-24 17:54:17.520111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.164 [2024-07-24 17:54:17.520121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.164 [2024-07-24 17:54:17.520266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.164 [2024-07-24 17:54:17.520403] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.164 [2024-07-24 17:54:17.520413] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.164 [2024-07-24 17:54:17.520423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.164 [2024-07-24 17:54:17.522225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.164 [2024-07-24 17:54:17.530878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.164 [2024-07-24 17:54:17.531539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.164 [2024-07-24 17:54:17.532056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.164 [2024-07-24 17:54:17.532097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.164 [2024-07-24 17:54:17.532131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.164 [2024-07-24 17:54:17.532533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.164 [2024-07-24 17:54:17.532623] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.164 [2024-07-24 17:54:17.532632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.164 [2024-07-24 17:54:17.532641] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.164 [2024-07-24 17:54:17.534432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.164 [2024-07-24 17:54:17.542515] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.164 [2024-07-24 17:54:17.543156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.164 [2024-07-24 17:54:17.543683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.164 [2024-07-24 17:54:17.543723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.164 [2024-07-24 17:54:17.543757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.164 [2024-07-24 17:54:17.544145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.164 [2024-07-24 17:54:17.544235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.164 [2024-07-24 17:54:17.544245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.164 [2024-07-24 17:54:17.544254] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.546080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.165 [2024-07-24 17:54:17.554361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.554997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.555527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.555568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.555601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.556132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.556361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.556370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.556378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.557957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.165 [2024-07-24 17:54:17.566304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.566911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.567415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.567466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.567499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.567917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.568133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.568143] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.568151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.569899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.165 [2024-07-24 17:54:17.578292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.578841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.579365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.579381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.579391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.579511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.579596] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.579605] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.579614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.581244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.165 [2024-07-24 17:54:17.590235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.590812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.591174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.591189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.591198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.591307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.591435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.591444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.591453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.593019] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.165 [2024-07-24 17:54:17.602070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.602721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.603130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.603171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.603205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.603576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.603929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.603969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.604001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.606131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.165 [2024-07-24 17:54:17.613853] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.614494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.614897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.614910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.614921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.615066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.615233] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.615244] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.615254] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.617061] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.165 [2024-07-24 17:54:17.626064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.626697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.627109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.627151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.627185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.627701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.628066] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.628110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.628140] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.630040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.165 [2024-07-24 17:54:17.638210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.638758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.639237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.639279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.639313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.639780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.640194] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.640226] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.640276] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.642146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.165 [2024-07-24 17:54:17.650072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.650659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.651120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.651161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.651195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.651428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.651557] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.651567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.651575] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.653188] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.165 [2024-07-24 17:54:17.661879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.662463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.663001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.663041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.663084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.663428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.663526] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.663535] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.663544] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.665438] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.165 [2024-07-24 17:54:17.673741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.674336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.674849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.674888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.674921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.675350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.675566] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.675575] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.675584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.677283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.165 [2024-07-24 17:54:17.685535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.686165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.686667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.686707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.686741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.687174] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.687630] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.687639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.687648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.689395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.165 [2024-07-24 17:54:17.697300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.697915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.698435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.698449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.698460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.698581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.698679] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.698688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.698697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.700391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.165 [2024-07-24 17:54:17.709131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.709728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.710174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.710215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.710249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.710426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.710526] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.710535] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.710543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.712144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.165 [2024-07-24 17:54:17.720982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.721582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.722108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.165 [2024-07-24 17:54:17.722150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.165 [2024-07-24 17:54:17.722183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.165 [2024-07-24 17:54:17.722576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.165 [2024-07-24 17:54:17.722689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.165 [2024-07-24 17:54:17.722698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.165 [2024-07-24 17:54:17.722707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.165 [2024-07-24 17:54:17.724441] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.165 [2024-07-24 17:54:17.732928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.165 [2024-07-24 17:54:17.733566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-24 17:54:17.733973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-24 17:54:17.734013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.166 [2024-07-24 17:54:17.734063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.166 [2024-07-24 17:54:17.734483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.166 [2024-07-24 17:54:17.734730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.166 [2024-07-24 17:54:17.734739] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.166 [2024-07-24 17:54:17.734747] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.166 [2024-07-24 17:54:17.736561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.166 [2024-07-24 17:54:17.744827] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.166 [2024-07-24 17:54:17.745436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-24 17:54:17.745942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-24 17:54:17.745982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.166 [2024-07-24 17:54:17.746015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.166 [2024-07-24 17:54:17.746408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.166 [2024-07-24 17:54:17.746535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.166 [2024-07-24 17:54:17.746544] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.166 [2024-07-24 17:54:17.746553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.166 [2024-07-24 17:54:17.748447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.166 [2024-07-24 17:54:17.757017] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.166 [2024-07-24 17:54:17.757665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-24 17:54:17.758065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.166 [2024-07-24 17:54:17.758105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.166 [2024-07-24 17:54:17.758139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.166 [2024-07-24 17:54:17.758605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.166 [2024-07-24 17:54:17.758864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.166 [2024-07-24 17:54:17.758873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.166 [2024-07-24 17:54:17.758883] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.427 [2024-07-24 17:54:17.760761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.427 [2024-07-24 17:54:17.768880] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.427 [2024-07-24 17:54:17.769479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.427 [2024-07-24 17:54:17.769747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.427 [2024-07-24 17:54:17.769787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.427 [2024-07-24 17:54:17.769821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.427 [2024-07-24 17:54:17.770347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.427 [2024-07-24 17:54:17.770751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.427 [2024-07-24 17:54:17.770783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.427 [2024-07-24 17:54:17.770821] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.427 [2024-07-24 17:54:17.772615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.427 [2024-07-24 17:54:17.780800] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.427 [2024-07-24 17:54:17.781406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.427 [2024-07-24 17:54:17.781883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.781923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.428 [2024-07-24 17:54:17.781955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.428 [2024-07-24 17:54:17.782340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.428 [2024-07-24 17:54:17.782601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.428 [2024-07-24 17:54:17.782610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.428 [2024-07-24 17:54:17.782619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.428 [2024-07-24 17:54:17.784292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.428 [2024-07-24 17:54:17.792739] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.428 [2024-07-24 17:54:17.793362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.793901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.793942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.428 [2024-07-24 17:54:17.793976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.428 [2024-07-24 17:54:17.794413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.428 [2024-07-24 17:54:17.794607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.428 [2024-07-24 17:54:17.794616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.428 [2024-07-24 17:54:17.794625] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.428 [2024-07-24 17:54:17.796318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.428 [2024-07-24 17:54:17.804685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.428 [2024-07-24 17:54:17.805329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.805830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.805870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.428 [2024-07-24 17:54:17.805904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.428 [2024-07-24 17:54:17.806286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.428 [2024-07-24 17:54:17.806535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.428 [2024-07-24 17:54:17.806544] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.428 [2024-07-24 17:54:17.806553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.428 [2024-07-24 17:54:17.808275] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.428 [2024-07-24 17:54:17.816505] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.428 [2024-07-24 17:54:17.817064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.817587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.817627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.428 [2024-07-24 17:54:17.817661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.428 [2024-07-24 17:54:17.817878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.428 [2024-07-24 17:54:17.818083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.428 [2024-07-24 17:54:17.818097] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.428 [2024-07-24 17:54:17.818111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.428 [2024-07-24 17:54:17.820627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.428 [2024-07-24 17:54:17.828643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.428 [2024-07-24 17:54:17.829278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.829806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.829846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.428 [2024-07-24 17:54:17.829890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.428 [2024-07-24 17:54:17.830321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.428 [2024-07-24 17:54:17.830673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.428 [2024-07-24 17:54:17.830704] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.428 [2024-07-24 17:54:17.830735] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.428 [2024-07-24 17:54:17.832681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.428 [2024-07-24 17:54:17.840495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.428 [2024-07-24 17:54:17.841062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.841409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.841449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.428 [2024-07-24 17:54:17.841483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.428 [2024-07-24 17:54:17.841679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.428 [2024-07-24 17:54:17.841792] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.428 [2024-07-24 17:54:17.841801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.428 [2024-07-24 17:54:17.841810] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.428 [2024-07-24 17:54:17.843505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.428 [2024-07-24 17:54:17.852357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.428 [2024-07-24 17:54:17.852859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.853386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.853427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.428 [2024-07-24 17:54:17.853461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.428 [2024-07-24 17:54:17.853828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.428 [2024-07-24 17:54:17.853987] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.428 [2024-07-24 17:54:17.853996] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.428 [2024-07-24 17:54:17.854005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.428 [2024-07-24 17:54:17.855524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.428 [2024-07-24 17:54:17.864228] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.428 [2024-07-24 17:54:17.864855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.865324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.865364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.428 [2024-07-24 17:54:17.865398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.428 [2024-07-24 17:54:17.865884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.428 [2024-07-24 17:54:17.866007] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.428 [2024-07-24 17:54:17.866017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.428 [2024-07-24 17:54:17.866027] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.428 [2024-07-24 17:54:17.867680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.428 [2024-07-24 17:54:17.876434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.428 [2024-07-24 17:54:17.877034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.877528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.877573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.428 [2024-07-24 17:54:17.877583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.428 [2024-07-24 17:54:17.877694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.428 [2024-07-24 17:54:17.877798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.428 [2024-07-24 17:54:17.877807] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.428 [2024-07-24 17:54:17.877817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.428 [2024-07-24 17:54:17.879609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.428 [2024-07-24 17:54:17.888281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.428 [2024-07-24 17:54:17.888830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.889235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.428 [2024-07-24 17:54:17.889277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.428 [2024-07-24 17:54:17.889311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.429 [2024-07-24 17:54:17.889679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.429 [2024-07-24 17:54:17.889931] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.429 [2024-07-24 17:54:17.889969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.429 [2024-07-24 17:54:17.889978] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.429 [2024-07-24 17:54:17.891685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.429 [2024-07-24 17:54:17.900104] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.429 [2024-07-24 17:54:17.900751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.901205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.901247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.429 [2024-07-24 17:54:17.901280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.429 [2024-07-24 17:54:17.901648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.429 [2024-07-24 17:54:17.901892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.429 [2024-07-24 17:54:17.901901] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.429 [2024-07-24 17:54:17.901910] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.429 [2024-07-24 17:54:17.903590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.429 [2024-07-24 17:54:17.911936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.429 [2024-07-24 17:54:17.912561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.913113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.913155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.429 [2024-07-24 17:54:17.913188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.429 [2024-07-24 17:54:17.913658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.429 [2024-07-24 17:54:17.914033] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.429 [2024-07-24 17:54:17.914049] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.429 [2024-07-24 17:54:17.914059] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.429 [2024-07-24 17:54:17.915804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.429 [2024-07-24 17:54:17.923737] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.429 [2024-07-24 17:54:17.924196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.924652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.924694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.429 [2024-07-24 17:54:17.924727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.429 [2024-07-24 17:54:17.924848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.429 [2024-07-24 17:54:17.924947] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.429 [2024-07-24 17:54:17.924957] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.429 [2024-07-24 17:54:17.924966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.429 [2024-07-24 17:54:17.926702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.429 [2024-07-24 17:54:17.935695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.429 [2024-07-24 17:54:17.936280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.936744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.936785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.429 [2024-07-24 17:54:17.936818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.429 [2024-07-24 17:54:17.936952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.429 [2024-07-24 17:54:17.937057] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.429 [2024-07-24 17:54:17.937070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.429 [2024-07-24 17:54:17.937079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.429 [2024-07-24 17:54:17.938753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.429 [2024-07-24 17:54:17.947534] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.429 [2024-07-24 17:54:17.948033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.948399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.948440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.429 [2024-07-24 17:54:17.948474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.429 [2024-07-24 17:54:17.948940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.429 [2024-07-24 17:54:17.949372] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.429 [2024-07-24 17:54:17.949387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.429 [2024-07-24 17:54:17.949401] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.429 [2024-07-24 17:54:17.952051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.429 [2024-07-24 17:54:17.959942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.429 [2024-07-24 17:54:17.960561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.961041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.961059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.429 [2024-07-24 17:54:17.961069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.429 [2024-07-24 17:54:17.961197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.429 [2024-07-24 17:54:17.961306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.429 [2024-07-24 17:54:17.961315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.429 [2024-07-24 17:54:17.961325] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.429 [2024-07-24 17:54:17.963068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.429 [2024-07-24 17:54:17.972066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.429 [2024-07-24 17:54:17.972635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.973098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.973140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.429 [2024-07-24 17:54:17.973173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.429 [2024-07-24 17:54:17.973355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.429 [2024-07-24 17:54:17.973463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.429 [2024-07-24 17:54:17.973472] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.429 [2024-07-24 17:54:17.973486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.429 [2024-07-24 17:54:17.975274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.429 [2024-07-24 17:54:17.984012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.429 [2024-07-24 17:54:17.984540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.984991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.985032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.429 [2024-07-24 17:54:17.985083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.429 [2024-07-24 17:54:17.985499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.429 [2024-07-24 17:54:17.985953] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.429 [2024-07-24 17:54:17.985986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.429 [2024-07-24 17:54:17.986017] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.429 [2024-07-24 17:54:17.987752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.429 [2024-07-24 17:54:17.995883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.429 [2024-07-24 17:54:17.996396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.996804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.429 [2024-07-24 17:54:17.996844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.429 [2024-07-24 17:54:17.996878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.429 [2024-07-24 17:54:17.997356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.429 [2024-07-24 17:54:17.997484] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.430 [2024-07-24 17:54:17.997493] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.430 [2024-07-24 17:54:17.997502] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.430 [2024-07-24 17:54:17.999170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.430 [2024-07-24 17:54:18.007744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.430 [2024-07-24 17:54:18.008356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.430 [2024-07-24 17:54:18.008816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.430 [2024-07-24 17:54:18.008856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.430 [2024-07-24 17:54:18.008889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.430 [2024-07-24 17:54:18.009329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.430 [2024-07-24 17:54:18.009430] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.430 [2024-07-24 17:54:18.009439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.430 [2024-07-24 17:54:18.009448] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.430 [2024-07-24 17:54:18.010951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.430 [2024-07-24 17:54:18.019688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.430 [2024-07-24 17:54:18.020193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.430 [2024-07-24 17:54:18.020555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.430 [2024-07-24 17:54:18.020596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.430 [2024-07-24 17:54:18.020629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.430 [2024-07-24 17:54:18.021208] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.430 [2024-07-24 17:54:18.021428] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.430 [2024-07-24 17:54:18.021438] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.430 [2024-07-24 17:54:18.021447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.430 [2024-07-24 17:54:18.023303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.692 [2024-07-24 17:54:18.031774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.692 [2024-07-24 17:54:18.032249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.032657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.032697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.692 [2024-07-24 17:54:18.032730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.692 [2024-07-24 17:54:18.033162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.692 [2024-07-24 17:54:18.033277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.692 [2024-07-24 17:54:18.033286] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.692 [2024-07-24 17:54:18.033295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.692 [2024-07-24 17:54:18.034943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.692 [2024-07-24 17:54:18.043665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.692 [2024-07-24 17:54:18.044169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.044584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.044624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.692 [2024-07-24 17:54:18.044657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.692 [2024-07-24 17:54:18.044975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.692 [2024-07-24 17:54:18.045240] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.692 [2024-07-24 17:54:18.045249] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.692 [2024-07-24 17:54:18.045258] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.692 [2024-07-24 17:54:18.046795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.692 [2024-07-24 17:54:18.055575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.692 [2024-07-24 17:54:18.056021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.056511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.056552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.692 [2024-07-24 17:54:18.056585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.692 [2024-07-24 17:54:18.057069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.692 [2024-07-24 17:54:18.057514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.692 [2024-07-24 17:54:18.057523] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.692 [2024-07-24 17:54:18.057532] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.692 [2024-07-24 17:54:18.059102] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.692 [2024-07-24 17:54:18.067269] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.692 [2024-07-24 17:54:18.067902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.068357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.068405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.692 [2024-07-24 17:54:18.068415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.692 [2024-07-24 17:54:18.068519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.692 [2024-07-24 17:54:18.068647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.692 [2024-07-24 17:54:18.068656] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.692 [2024-07-24 17:54:18.068665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.692 [2024-07-24 17:54:18.070416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.692 [2024-07-24 17:54:18.079040] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.692 [2024-07-24 17:54:18.079535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.079978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.080018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.692 [2024-07-24 17:54:18.080065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.692 [2024-07-24 17:54:18.080497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.692 [2024-07-24 17:54:18.080638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.692 [2024-07-24 17:54:18.080647] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.692 [2024-07-24 17:54:18.080656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.692 [2024-07-24 17:54:18.083286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.692 [2024-07-24 17:54:18.091473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.692 [2024-07-24 17:54:18.091951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.092380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.092423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.692 [2024-07-24 17:54:18.092457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.692 [2024-07-24 17:54:18.092630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.692 [2024-07-24 17:54:18.092745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.692 [2024-07-24 17:54:18.092755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.692 [2024-07-24 17:54:18.092764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.692 [2024-07-24 17:54:18.094419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.692 [2024-07-24 17:54:18.103440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.692 [2024-07-24 17:54:18.104025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.104472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.104509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.692 [2024-07-24 17:54:18.104519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.692 [2024-07-24 17:54:18.104644] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.692 [2024-07-24 17:54:18.104793] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.692 [2024-07-24 17:54:18.104803] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.692 [2024-07-24 17:54:18.104812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.692 [2024-07-24 17:54:18.106449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.692 [2024-07-24 17:54:18.115300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.692 [2024-07-24 17:54:18.116073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.116485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.692 [2024-07-24 17:54:18.116526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.692 [2024-07-24 17:54:18.116560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.692 [2024-07-24 17:54:18.117141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.692 [2024-07-24 17:54:18.117551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.693 [2024-07-24 17:54:18.117561] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.693 [2024-07-24 17:54:18.117571] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.693 [2024-07-24 17:54:18.119328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.693 [2024-07-24 17:54:18.127190] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.693 [2024-07-24 17:54:18.127748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.128171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.128213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.693 [2024-07-24 17:54:18.128247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.693 [2024-07-24 17:54:18.128399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.693 [2024-07-24 17:54:18.128548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.693 [2024-07-24 17:54:18.128557] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.693 [2024-07-24 17:54:18.128566] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.693 [2024-07-24 17:54:18.130425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.693 [2024-07-24 17:54:18.139210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.693 [2024-07-24 17:54:18.139683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.140222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.140264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.693 [2024-07-24 17:54:18.140298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.693 [2024-07-24 17:54:18.140601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.693 [2024-07-24 17:54:18.140736] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.693 [2024-07-24 17:54:18.140745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.693 [2024-07-24 17:54:18.140769] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.693 [2024-07-24 17:54:18.143230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.693 [2024-07-24 17:54:18.151658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.693 [2024-07-24 17:54:18.152222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.152733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.152773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.693 [2024-07-24 17:54:18.152810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.693 [2024-07-24 17:54:18.153239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.693 [2024-07-24 17:54:18.153641] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.693 [2024-07-24 17:54:18.153665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.693 [2024-07-24 17:54:18.153674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.693 [2024-07-24 17:54:18.155315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.693 [2024-07-24 17:54:18.163650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.693 [2024-07-24 17:54:18.164218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.164677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.164718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.693 [2024-07-24 17:54:18.164761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.693 [2024-07-24 17:54:18.165242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.693 [2024-07-24 17:54:18.165701] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.693 [2024-07-24 17:54:18.165710] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.693 [2024-07-24 17:54:18.165719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.693 [2024-07-24 17:54:18.167319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.693 [2024-07-24 17:54:18.175426] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.693 [2024-07-24 17:54:18.176009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.176364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.176408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.693 [2024-07-24 17:54:18.176443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.693 [2024-07-24 17:54:18.176763] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.693 [2024-07-24 17:54:18.177125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.693 [2024-07-24 17:54:18.177157] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.693 [2024-07-24 17:54:18.177188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.693 [2024-07-24 17:54:18.179223] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.693 [2024-07-24 17:54:18.187118] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.693 [2024-07-24 17:54:18.187573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.188000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.188039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.693 [2024-07-24 17:54:18.188090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.693 [2024-07-24 17:54:18.188361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.693 [2024-07-24 17:54:18.188762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.693 [2024-07-24 17:54:18.188794] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.693 [2024-07-24 17:54:18.188825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.693 [2024-07-24 17:54:18.190451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.693 [2024-07-24 17:54:18.198889] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.693 [2024-07-24 17:54:18.199462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.199871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.199911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.693 [2024-07-24 17:54:18.199946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.693 [2024-07-24 17:54:18.200187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.693 [2024-07-24 17:54:18.200303] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.693 [2024-07-24 17:54:18.200312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.693 [2024-07-24 17:54:18.200321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.693 [2024-07-24 17:54:18.201865] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.693 [2024-07-24 17:54:18.210692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.693 [2024-07-24 17:54:18.211295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.211713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.211764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.693 [2024-07-24 17:54:18.211774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.693 [2024-07-24 17:54:18.211892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.693 [2024-07-24 17:54:18.212005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.693 [2024-07-24 17:54:18.212014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.693 [2024-07-24 17:54:18.212023] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.693 [2024-07-24 17:54:18.213679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.693 [2024-07-24 17:54:18.222638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.693 [2024-07-24 17:54:18.223255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.223667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.693 [2024-07-24 17:54:18.223708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.693 [2024-07-24 17:54:18.223742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.693 [2024-07-24 17:54:18.224220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.693 [2024-07-24 17:54:18.224622] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.693 [2024-07-24 17:54:18.224654] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.693 [2024-07-24 17:54:18.224684] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.694 [2024-07-24 17:54:18.226401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.694 [2024-07-24 17:54:18.234637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.694 [2024-07-24 17:54:18.235158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.694 [2024-07-24 17:54:18.235572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.694 [2024-07-24 17:54:18.235612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.694 [2024-07-24 17:54:18.235647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.694 [2024-07-24 17:54:18.235839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.694 [2024-07-24 17:54:18.235952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.694 [2024-07-24 17:54:18.235962] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.694 [2024-07-24 17:54:18.235971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.694 [2024-07-24 17:54:18.237796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.694 [2024-07-24 17:54:18.246536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.694 [2024-07-24 17:54:18.247100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.694 [2024-07-24 17:54:18.247499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.694 [2024-07-24 17:54:18.247512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.694 [2024-07-24 17:54:18.247522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.694 [2024-07-24 17:54:18.247633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.694 [2024-07-24 17:54:18.247752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.694 [2024-07-24 17:54:18.247761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.694 [2024-07-24 17:54:18.247770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.694 [2024-07-24 17:54:18.249466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.694 [2024-07-24 17:54:18.258444] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.694 [2024-07-24 17:54:18.259070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.694 [2024-07-24 17:54:18.259532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.694 [2024-07-24 17:54:18.259572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.694 [2024-07-24 17:54:18.259605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.694 [2024-07-24 17:54:18.260021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.694 [2024-07-24 17:54:18.260438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.694 [2024-07-24 17:54:18.260471] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.694 [2024-07-24 17:54:18.260501] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.694 [2024-07-24 17:54:18.262536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.694 [2024-07-24 17:54:18.270551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.694 [2024-07-24 17:54:18.271302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.694 [2024-07-24 17:54:18.271788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.694 [2024-07-24 17:54:18.271830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.694 [2024-07-24 17:54:18.271863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.694 [2024-07-24 17:54:18.272249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.694 [2024-07-24 17:54:18.272564] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.694 [2024-07-24 17:54:18.272579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.694 [2024-07-24 17:54:18.272592] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.694 [2024-07-24 17:54:18.275402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.694 [2024-07-24 17:54:18.283108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.694 [2024-07-24 17:54:18.283581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.694 [2024-07-24 17:54:18.284088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.694 [2024-07-24 17:54:18.284129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.694 [2024-07-24 17:54:18.284163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.694 [2024-07-24 17:54:18.284533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.694 [2024-07-24 17:54:18.284851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.694 [2024-07-24 17:54:18.284860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.694 [2024-07-24 17:54:18.284870] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.694 [2024-07-24 17:54:18.286733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.957 [2024-07-24 17:54:18.295066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.957 [2024-07-24 17:54:18.295607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.296206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.296220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.957 [2024-07-24 17:54:18.296230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.957 [2024-07-24 17:54:18.296385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.957 [2024-07-24 17:54:18.296489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.957 [2024-07-24 17:54:18.296498] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.957 [2024-07-24 17:54:18.296508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.957 [2024-07-24 17:54:18.298303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.957 [2024-07-24 17:54:18.306910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.957 [2024-07-24 17:54:18.307505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.308035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.308088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.957 [2024-07-24 17:54:18.308121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.957 [2024-07-24 17:54:18.308488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.957 [2024-07-24 17:54:18.308987] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.957 [2024-07-24 17:54:18.309019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.957 [2024-07-24 17:54:18.309084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.957 [2024-07-24 17:54:18.310799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.957 [2024-07-24 17:54:18.318924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.957 [2024-07-24 17:54:18.319442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.319850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.319891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.957 [2024-07-24 17:54:18.319925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.957 [2024-07-24 17:54:18.320356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.957 [2024-07-24 17:54:18.320857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.957 [2024-07-24 17:54:18.320889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.957 [2024-07-24 17:54:18.320920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.957 [2024-07-24 17:54:18.322641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.957 [2024-07-24 17:54:18.330823] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.957 [2024-07-24 17:54:18.331453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.331936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.331975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.957 [2024-07-24 17:54:18.332008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.957 [2024-07-24 17:54:18.332166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.957 [2024-07-24 17:54:18.332252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.957 [2024-07-24 17:54:18.332261] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.957 [2024-07-24 17:54:18.332270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.957 [2024-07-24 17:54:18.333917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.957 [2024-07-24 17:54:18.342626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.957 [2024-07-24 17:54:18.343270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.343676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.343716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.957 [2024-07-24 17:54:18.343749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.957 [2024-07-24 17:54:18.344145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.957 [2024-07-24 17:54:18.344265] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.957 [2024-07-24 17:54:18.344275] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.957 [2024-07-24 17:54:18.344288] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.957 [2024-07-24 17:54:18.345964] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 777466 Killed "${NVMF_APP[@]}" "$@" 00:28:56.957 17:54:18 -- host/bdevperf.sh@36 -- # tgt_init 00:28:56.957 17:54:18 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:56.957 17:54:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:56.957 17:54:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:56.957 17:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:56.957 [2024-07-24 17:54:18.354614] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.957 [2024-07-24 17:54:18.355282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.355737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.355751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.957 [2024-07-24 17:54:18.355761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.957 [2024-07-24 17:54:18.355904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.957 [2024-07-24 17:54:18.356047] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.957 [2024-07-24 17:54:18.356058] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.957 [2024-07-24 17:54:18.356068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:56.957 17:54:18 -- nvmf/common.sh@469 -- # nvmfpid=778913 00:28:56.957 17:54:18 -- nvmf/common.sh@470 -- # waitforlisten 778913 00:28:56.957 17:54:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:56.957 17:54:18 -- common/autotest_common.sh@819 -- # '[' -z 778913 ']' 00:28:56.957 17:54:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.957 17:54:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:56.957 17:54:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.957 [2024-07-24 17:54:18.357849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.957 17:54:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:56.957 17:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:56.957 [2024-07-24 17:54:18.366686] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.957 [2024-07-24 17:54:18.367305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.367662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.367677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.957 [2024-07-24 17:54:18.367688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.957 [2024-07-24 17:54:18.367836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.957 [2024-07-24 17:54:18.367989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.957 [2024-07-24 17:54:18.367999] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.957 [2024-07-24 17:54:18.368009] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.957 [2024-07-24 17:54:18.369878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
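Note: the `line 35: 777466 Killed "${NVMF_APP[@]}"` message and the `tgt_init` / `nvmfappstart -m 0xE` trace interleaved above show what drives these failures: bdevperf.sh bounces the nvmf target mid-test — the old target (PID 777466) is killed and a new one (PID 778913) is started inside the cvl_0_0_ns_spdk namespace — so every reset attempt in between hits a refused connection. A minimal sketch of that kill-and-restart shape, using only what the trace shows; the helper names come from SPDK's test scripts, and the real bdevperf.sh does more setup than this:

```bash
#!/usr/bin/env bash
# Hedged sketch of the target bounce visible in the trace above, NOT the actual bdevperf.sh.
# Assumes the SPDK test helpers (nvmfappstart, waitforlisten) are already sourced,
# as they are in the real autotest environment, and that $nvmfpid holds the old target PID.
set -e

# Stop the running nvmf target; in-flight resets from bdevperf now fail with ECONNREFUSED.
kill -9 "$nvmfpid"
wait "$nvmfpid" 2>/dev/null || true

# Restart the target and wait for its RPC socket so the host side can reconnect.
# Per the trace above, in this run this expanded to:
#   ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE   followed by
#   waitforlisten $nvmfpid
nvmfappstart -m 0xE
```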
00:28:56.957 [2024-07-24 17:54:18.378663] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.957 [2024-07-24 17:54:18.379207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.379611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.957 [2024-07-24 17:54:18.379624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.957 [2024-07-24 17:54:18.379635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.957 [2024-07-24 17:54:18.379734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.958 [2024-07-24 17:54:18.379858] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.958 [2024-07-24 17:54:18.379868] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.958 [2024-07-24 17:54:18.379877] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.958 [2024-07-24 17:54:18.381742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.958 [2024-07-24 17:54:18.390679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.958 [2024-07-24 17:54:18.391297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.391700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.391714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.958 [2024-07-24 17:54:18.391725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.958 [2024-07-24 17:54:18.391884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.958 [2024-07-24 17:54:18.392023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.958 [2024-07-24 17:54:18.392033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.958 [2024-07-24 17:54:18.392050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.958 [2024-07-24 17:54:18.393670] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.958 [2024-07-24 17:54:18.401201] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:56.958 [2024-07-24 17:54:18.401239] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.958 [2024-07-24 17:54:18.402688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.958 [2024-07-24 17:54:18.403328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.403755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.403769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.958 [2024-07-24 17:54:18.403780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.958 [2024-07-24 17:54:18.403905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.958 [2024-07-24 17:54:18.404040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.958 [2024-07-24 17:54:18.404056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.958 [2024-07-24 17:54:18.404070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.958 [2024-07-24 17:54:18.405948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.958 [2024-07-24 17:54:18.414688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.958 [2024-07-24 17:54:18.415356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.415831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.415844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.958 [2024-07-24 17:54:18.415854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.958 [2024-07-24 17:54:18.415979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.958 [2024-07-24 17:54:18.416120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.958 [2024-07-24 17:54:18.416138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.958 [2024-07-24 17:54:18.416147] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.958 [2024-07-24 17:54:18.418008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.958 [2024-07-24 17:54:18.426595] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.958 [2024-07-24 17:54:18.427214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.958 [2024-07-24 17:54:18.427553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.427566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.958 [2024-07-24 17:54:18.427576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.958 [2024-07-24 17:54:18.427687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.958 [2024-07-24 17:54:18.427821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.958 [2024-07-24 17:54:18.427830] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.958 [2024-07-24 17:54:18.427840] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.958 [2024-07-24 17:54:18.429664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.958 [2024-07-24 17:54:18.438471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.958 [2024-07-24 17:54:18.439079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.439529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.439542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.958 [2024-07-24 17:54:18.439553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.958 [2024-07-24 17:54:18.439692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.958 [2024-07-24 17:54:18.439826] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.958 [2024-07-24 17:54:18.439836] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.958 [2024-07-24 17:54:18.439845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.958 [2024-07-24 17:54:18.441771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
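Note: the restarted target's DPDK EAL reports "No free 2048 kB hugepages reported on node 1" above; the run continues, so memory was satisfied elsewhere, but if a notice like this ever turns into an allocation failure the hugepage state can be checked directly on the host. A small sketch using standard Linux procfs/sysfs interfaces (not an SPDK tool; the directory name assumes the usual 2048 kB page size):

```bash
# Quick per-node hugepage inventory on the build host.
grep -i huge /proc/meminfo

for node in /sys/devices/system/node/node*; do
  printf '%s free 2MB hugepages: ' "${node##*/}"
  cat "$node/hugepages/hugepages-2048kB/free_hugepages"
done
```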
00:28:56.958 [2024-07-24 17:54:18.450584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.958 [2024-07-24 17:54:18.451177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.451649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.451663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.958 [2024-07-24 17:54:18.451673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.958 [2024-07-24 17:54:18.451797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.958 [2024-07-24 17:54:18.451888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.958 [2024-07-24 17:54:18.451897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.958 [2024-07-24 17:54:18.451907] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.958 [2024-07-24 17:54:18.453638] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.958 [2024-07-24 17:54:18.458470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:56.958 [2024-07-24 17:54:18.462473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.958 [2024-07-24 17:54:18.463109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.463445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.463458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.958 [2024-07-24 17:54:18.463470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.958 [2024-07-24 17:54:18.463568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.958 [2024-07-24 17:54:18.463689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.958 [2024-07-24 17:54:18.463699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.958 [2024-07-24 17:54:18.463711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.958 [2024-07-24 17:54:18.465390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.958 [2024-07-24 17:54:18.474429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.958 [2024-07-24 17:54:18.475053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.475535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.475549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.958 [2024-07-24 17:54:18.475560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.958 [2024-07-24 17:54:18.475702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.958 [2024-07-24 17:54:18.475837] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.958 [2024-07-24 17:54:18.475847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.958 [2024-07-24 17:54:18.475856] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.958 [2024-07-24 17:54:18.477705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.958 [2024-07-24 17:54:18.486395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.958 [2024-07-24 17:54:18.486987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.487436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.958 [2024-07-24 17:54:18.487450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.958 [2024-07-24 17:54:18.487460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.958 [2024-07-24 17:54:18.487601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.959 [2024-07-24 17:54:18.487720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.959 [2024-07-24 17:54:18.487731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.959 [2024-07-24 17:54:18.487741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.959 [2024-07-24 17:54:18.489517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.959 [2024-07-24 17:54:18.498344] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.959 [2024-07-24 17:54:18.498980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.959 [2024-07-24 17:54:18.499455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.959 [2024-07-24 17:54:18.499470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.959 [2024-07-24 17:54:18.499482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.959 [2024-07-24 17:54:18.499612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.959 [2024-07-24 17:54:18.499778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.959 [2024-07-24 17:54:18.499788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.959 [2024-07-24 17:54:18.499799] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.959 [2024-07-24 17:54:18.501520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.959 [2024-07-24 17:54:18.510510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.959 [2024-07-24 17:54:18.511135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.959 [2024-07-24 17:54:18.511607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.959 [2024-07-24 17:54:18.511621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.959 [2024-07-24 17:54:18.511632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.959 [2024-07-24 17:54:18.511759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.959 [2024-07-24 17:54:18.511864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.959 [2024-07-24 17:54:18.511874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.959 [2024-07-24 17:54:18.511884] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.959 [2024-07-24 17:54:18.513695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.959 [2024-07-24 17:54:18.522447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.959 [2024-07-24 17:54:18.523093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.959 [2024-07-24 17:54:18.523440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.959 [2024-07-24 17:54:18.523453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.959 [2024-07-24 17:54:18.523464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.959 [2024-07-24 17:54:18.523561] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.959 [2024-07-24 17:54:18.523666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.959 [2024-07-24 17:54:18.523676] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.959 [2024-07-24 17:54:18.523686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.959 [2024-07-24 17:54:18.525423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.959 [2024-07-24 17:54:18.531664] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:56.959 [2024-07-24 17:54:18.531758] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.959 [2024-07-24 17:54:18.531766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.959 [2024-07-24 17:54:18.531772] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
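Note: the app_setup_trace notices above spell out how to retrieve trace data for this run — tracepoint group mask 0xFFFF is enabled (from `-e 0xFFFF`), a runtime snapshot can be taken with `spdk_trace -s nvmf -i 0`, or the shared-memory file `/dev/shm/nvmf_trace.0` can be copied for offline analysis. A short sketch of the offline path, assuming it runs on the target host while that instance (`-i 0`) still exists:

```bash
# Offline capture of the trace data the target advertises above.
# The source path and the spdk_trace invocation come straight from the log messages.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0

# Live alternative suggested by the app itself (same instance id as nvmf_tgt -i 0):
# spdk_trace -s nvmf -i 0
```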
00:28:56.959 [2024-07-24 17:54:18.531809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.959 [2024-07-24 17:54:18.531896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.959 [2024-07-24 17:54:18.531897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.959 [2024-07-24 17:54:18.534535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.959 [2024-07-24 17:54:18.535161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.959 [2024-07-24 17:54:18.535639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.959 [2024-07-24 17:54:18.535652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.959 [2024-07-24 17:54:18.535665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.959 [2024-07-24 17:54:18.535798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.959 [2024-07-24 17:54:18.535954] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.959 [2024-07-24 17:54:18.535964] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.959 [2024-07-24 17:54:18.535975] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.959 [2024-07-24 17:54:18.537594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.959 [2024-07-24 17:54:18.546729] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.959 [2024-07-24 17:54:18.547398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.959 [2024-07-24 17:54:18.547852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.959 [2024-07-24 17:54:18.547867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:56.959 [2024-07-24 17:54:18.547879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:56.959 [2024-07-24 17:54:18.547998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:56.959 [2024-07-24 17:54:18.548101] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.959 [2024-07-24 17:54:18.548112] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.959 [2024-07-24 17:54:18.548123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.959 [2024-07-24 17:54:18.549923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.221 [2024-07-24 17:54:18.558805] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.221 [2024-07-24 17:54:18.559443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.221 [2024-07-24 17:54:18.559850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.221 [2024-07-24 17:54:18.559864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.221 [2024-07-24 17:54:18.559876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.221 [2024-07-24 17:54:18.560039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.221 [2024-07-24 17:54:18.560198] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.221 [2024-07-24 17:54:18.560209] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.221 [2024-07-24 17:54:18.560219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.221 [2024-07-24 17:54:18.562201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.221 [2024-07-24 17:54:18.570944] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.221 [2024-07-24 17:54:18.571590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.221 [2024-07-24 17:54:18.572089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.221 [2024-07-24 17:54:18.572104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.221 [2024-07-24 17:54:18.572116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.221 [2024-07-24 17:54:18.572277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.221 [2024-07-24 17:54:18.572385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.221 [2024-07-24 17:54:18.572395] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.221 [2024-07-24 17:54:18.572406] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.221 [2024-07-24 17:54:18.574089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.221 [2024-07-24 17:54:18.583086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.221 [2024-07-24 17:54:18.583652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.221 [2024-07-24 17:54:18.584151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.221 [2024-07-24 17:54:18.584165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.221 [2024-07-24 17:54:18.584178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.221 [2024-07-24 17:54:18.584309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.222 [2024-07-24 17:54:18.584447] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.222 [2024-07-24 17:54:18.584463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.222 [2024-07-24 17:54:18.584474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.222 [2024-07-24 17:54:18.586256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.222 [2024-07-24 17:54:18.595054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.222 [2024-07-24 17:54:18.595634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.596063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.596077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.222 [2024-07-24 17:54:18.596088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.222 [2024-07-24 17:54:18.596235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.222 [2024-07-24 17:54:18.596344] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.222 [2024-07-24 17:54:18.596355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.222 [2024-07-24 17:54:18.596365] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.222 [2024-07-24 17:54:18.598329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.222 [2024-07-24 17:54:18.606979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.222 [2024-07-24 17:54:18.607617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.608112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.608126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.222 [2024-07-24 17:54:18.608137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.222 [2024-07-24 17:54:18.608252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.222 [2024-07-24 17:54:18.608361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.222 [2024-07-24 17:54:18.608371] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.222 [2024-07-24 17:54:18.608380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.222 [2024-07-24 17:54:18.610256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.222 [2024-07-24 17:54:18.619327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.222 [2024-07-24 17:54:18.619921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.620396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.620411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.222 [2024-07-24 17:54:18.620422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.222 [2024-07-24 17:54:18.620537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.222 [2024-07-24 17:54:18.620675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.222 [2024-07-24 17:54:18.620685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.222 [2024-07-24 17:54:18.620699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.222 [2024-07-24 17:54:18.622606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.222 [2024-07-24 17:54:18.631391] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.222 [2024-07-24 17:54:18.631998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.632468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.632481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.222 [2024-07-24 17:54:18.632493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.222 [2024-07-24 17:54:18.632607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.222 [2024-07-24 17:54:18.632730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.222 [2024-07-24 17:54:18.632740] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.222 [2024-07-24 17:54:18.632750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.222 [2024-07-24 17:54:18.634473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.222 [2024-07-24 17:54:18.643371] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.222 [2024-07-24 17:54:18.643958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.644450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.644464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.222 [2024-07-24 17:54:18.644475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.222 [2024-07-24 17:54:18.644635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.222 [2024-07-24 17:54:18.644773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.222 [2024-07-24 17:54:18.644783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.222 [2024-07-24 17:54:18.644793] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.222 [2024-07-24 17:54:18.646545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.222 [2024-07-24 17:54:18.655484] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.222 [2024-07-24 17:54:18.656159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.656561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.656576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.222 [2024-07-24 17:54:18.656586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.222 [2024-07-24 17:54:18.656715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.222 [2024-07-24 17:54:18.656853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.222 [2024-07-24 17:54:18.656863] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.222 [2024-07-24 17:54:18.656872] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.222 [2024-07-24 17:54:18.658826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.222 [2024-07-24 17:54:18.667525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.222 [2024-07-24 17:54:18.668179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.668629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.668643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.222 [2024-07-24 17:54:18.668654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.222 [2024-07-24 17:54:18.668800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.222 [2024-07-24 17:54:18.668894] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.222 [2024-07-24 17:54:18.668904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.222 [2024-07-24 17:54:18.668914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.222 [2024-07-24 17:54:18.670702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.222 [2024-07-24 17:54:18.679411] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.222 [2024-07-24 17:54:18.679951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.680362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.680376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.222 [2024-07-24 17:54:18.680388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.222 [2024-07-24 17:54:18.680547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.222 [2024-07-24 17:54:18.680655] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.222 [2024-07-24 17:54:18.680665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.222 [2024-07-24 17:54:18.680675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.222 [2024-07-24 17:54:18.682401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.222 [2024-07-24 17:54:18.691421] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.222 [2024-07-24 17:54:18.691959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.692353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.222 [2024-07-24 17:54:18.692367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.222 [2024-07-24 17:54:18.692378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.222 [2024-07-24 17:54:18.692493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.222 [2024-07-24 17:54:18.692647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.223 [2024-07-24 17:54:18.692658] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.223 [2024-07-24 17:54:18.692668] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.223 [2024-07-24 17:54:18.694516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.223 [2024-07-24 17:54:18.703540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.223 [2024-07-24 17:54:18.704073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.704506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.704518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.223 [2024-07-24 17:54:18.704529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.223 [2024-07-24 17:54:18.704643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.223 [2024-07-24 17:54:18.704797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.223 [2024-07-24 17:54:18.704807] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.223 [2024-07-24 17:54:18.704816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.223 [2024-07-24 17:54:18.706709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.223 [2024-07-24 17:54:18.715601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.223 [2024-07-24 17:54:18.716186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.716660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.716673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.223 [2024-07-24 17:54:18.716684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.223 [2024-07-24 17:54:18.716797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.223 [2024-07-24 17:54:18.716905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.223 [2024-07-24 17:54:18.716915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.223 [2024-07-24 17:54:18.716924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.223 [2024-07-24 17:54:18.718665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.223 [2024-07-24 17:54:18.727538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.223 [2024-07-24 17:54:18.728141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.728644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.728658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.223 [2024-07-24 17:54:18.728668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.223 [2024-07-24 17:54:18.728782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.223 [2024-07-24 17:54:18.728890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.223 [2024-07-24 17:54:18.728899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.223 [2024-07-24 17:54:18.728909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.223 [2024-07-24 17:54:18.730892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.223 [2024-07-24 17:54:18.739630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.223 [2024-07-24 17:54:18.740262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.740665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.740679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.223 [2024-07-24 17:54:18.740689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.223 [2024-07-24 17:54:18.740820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.223 [2024-07-24 17:54:18.740943] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.223 [2024-07-24 17:54:18.740952] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.223 [2024-07-24 17:54:18.740962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.223 [2024-07-24 17:54:18.742789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.223 [2024-07-24 17:54:18.751782] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.223 [2024-07-24 17:54:18.752364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.752838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.752851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.223 [2024-07-24 17:54:18.752862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.223 [2024-07-24 17:54:18.752978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.223 [2024-07-24 17:54:18.753091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.223 [2024-07-24 17:54:18.753101] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.223 [2024-07-24 17:54:18.753110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.223 [2024-07-24 17:54:18.754981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.223 [2024-07-24 17:54:18.763760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.223 [2024-07-24 17:54:18.764362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.764861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.764874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.223 [2024-07-24 17:54:18.764885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.223 [2024-07-24 17:54:18.765028] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.223 [2024-07-24 17:54:18.765125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.223 [2024-07-24 17:54:18.765136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.223 [2024-07-24 17:54:18.765145] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.223 [2024-07-24 17:54:18.767124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.223 [2024-07-24 17:54:18.775695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.223 [2024-07-24 17:54:18.776257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.776732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.776748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.223 [2024-07-24 17:54:18.776758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.223 [2024-07-24 17:54:18.776872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.223 [2024-07-24 17:54:18.777040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.223 [2024-07-24 17:54:18.777054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.223 [2024-07-24 17:54:18.777064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.223 [2024-07-24 17:54:18.778919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.223 [2024-07-24 17:54:18.787792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.223 [2024-07-24 17:54:18.788320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.788791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.788804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.223 [2024-07-24 17:54:18.788814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.223 [2024-07-24 17:54:18.788944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.223 [2024-07-24 17:54:18.789037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.223 [2024-07-24 17:54:18.789051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.223 [2024-07-24 17:54:18.789061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.223 [2024-07-24 17:54:18.790888] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.223 [2024-07-24 17:54:18.799804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.223 [2024-07-24 17:54:18.800325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.800828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.223 [2024-07-24 17:54:18.800841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.223 [2024-07-24 17:54:18.800851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.223 [2024-07-24 17:54:18.800981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.223 [2024-07-24 17:54:18.801092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.223 [2024-07-24 17:54:18.801103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.223 [2024-07-24 17:54:18.801112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.223 [2024-07-24 17:54:18.802802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.224 [2024-07-24 17:54:18.811779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.224 [2024-07-24 17:54:18.812353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.224 [2024-07-24 17:54:18.812795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.224 [2024-07-24 17:54:18.812809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.224 [2024-07-24 17:54:18.812823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.224 [2024-07-24 17:54:18.812953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.224 [2024-07-24 17:54:18.813096] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.224 [2024-07-24 17:54:18.813106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.224 [2024-07-24 17:54:18.813115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.224 [2024-07-24 17:54:18.814880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.485 [2024-07-24 17:54:18.823720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.485 [2024-07-24 17:54:18.824287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.485 [2024-07-24 17:54:18.824783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.485 [2024-07-24 17:54:18.824796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.485 [2024-07-24 17:54:18.824807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.485 [2024-07-24 17:54:18.824922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.485 [2024-07-24 17:54:18.825016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.485 [2024-07-24 17:54:18.825026] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.485 [2024-07-24 17:54:18.825035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.485 [2024-07-24 17:54:18.826641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.485 [2024-07-24 17:54:18.835733] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.485 [2024-07-24 17:54:18.836376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.485 [2024-07-24 17:54:18.836873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.485 [2024-07-24 17:54:18.836886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.485 [2024-07-24 17:54:18.836897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.485 [2024-07-24 17:54:18.837040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.485 [2024-07-24 17:54:18.837152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.485 [2024-07-24 17:54:18.837162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.485 [2024-07-24 17:54:18.837171] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.485 [2024-07-24 17:54:18.838919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.485 [2024-07-24 17:54:18.847865] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.485 [2024-07-24 17:54:18.848462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.485 [2024-07-24 17:54:18.848888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.485 [2024-07-24 17:54:18.848901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.485 [2024-07-24 17:54:18.848912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.485 [2024-07-24 17:54:18.849029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.485 [2024-07-24 17:54:18.849170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.485 [2024-07-24 17:54:18.849180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.485 [2024-07-24 17:54:18.849190] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.485 [2024-07-24 17:54:18.850939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.485 [2024-07-24 17:54:18.859910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.485 [2024-07-24 17:54:18.860505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.485 [2024-07-24 17:54:18.860903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.485 [2024-07-24 17:54:18.860917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.485 [2024-07-24 17:54:18.860928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.485 [2024-07-24 17:54:18.861028] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.485 [2024-07-24 17:54:18.861141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.485 [2024-07-24 17:54:18.861151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.485 [2024-07-24 17:54:18.861160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.485 [2024-07-24 17:54:18.862958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.485 [2024-07-24 17:54:18.871879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.485 [2024-07-24 17:54:18.872522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.485 [2024-07-24 17:54:18.872994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.485 [2024-07-24 17:54:18.873007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.485 [2024-07-24 17:54:18.873018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.485 [2024-07-24 17:54:18.873152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.485 [2024-07-24 17:54:18.873291] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.486 [2024-07-24 17:54:18.873300] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.486 [2024-07-24 17:54:18.873310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.486 [2024-07-24 17:54:18.875139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.486 [2024-07-24 17:54:18.883856] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.486 [2024-07-24 17:54:18.884386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.884881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.884895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.486 [2024-07-24 17:54:18.884906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.486 [2024-07-24 17:54:18.885074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.486 [2024-07-24 17:54:18.885201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.486 [2024-07-24 17:54:18.885212] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.486 [2024-07-24 17:54:18.885221] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.486 [2024-07-24 17:54:18.887001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.486 [2024-07-24 17:54:18.895947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.486 [2024-07-24 17:54:18.896538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.897023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.897036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.486 [2024-07-24 17:54:18.897051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.486 [2024-07-24 17:54:18.897165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.486 [2024-07-24 17:54:18.897287] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.486 [2024-07-24 17:54:18.897297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.486 [2024-07-24 17:54:18.897307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.486 [2024-07-24 17:54:18.899257] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.486 [2024-07-24 17:54:18.907858] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.486 [2024-07-24 17:54:18.908447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.908917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.908930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.486 [2024-07-24 17:54:18.908941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.486 [2024-07-24 17:54:18.909074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.486 [2024-07-24 17:54:18.909167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.486 [2024-07-24 17:54:18.909177] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.486 [2024-07-24 17:54:18.909186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.486 [2024-07-24 17:54:18.910906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.486 [2024-07-24 17:54:18.919822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.486 [2024-07-24 17:54:18.920368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.920870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.920883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.486 [2024-07-24 17:54:18.920894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.486 [2024-07-24 17:54:18.921038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.486 [2024-07-24 17:54:18.921180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.486 [2024-07-24 17:54:18.921193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.486 [2024-07-24 17:54:18.921203] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.486 [2024-07-24 17:54:18.923195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.486 [2024-07-24 17:54:18.931803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.486 [2024-07-24 17:54:18.932417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.932912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.932925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.486 [2024-07-24 17:54:18.932936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.486 [2024-07-24 17:54:18.933053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.486 [2024-07-24 17:54:18.933176] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.486 [2024-07-24 17:54:18.933186] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.486 [2024-07-24 17:54:18.933195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.486 [2024-07-24 17:54:18.934929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.486 [2024-07-24 17:54:18.943811] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.486 [2024-07-24 17:54:18.944398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.944895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.944908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.486 [2024-07-24 17:54:18.944919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.486 [2024-07-24 17:54:18.945051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.486 [2024-07-24 17:54:18.945190] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.486 [2024-07-24 17:54:18.945200] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.486 [2024-07-24 17:54:18.945210] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.486 [2024-07-24 17:54:18.947033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.486 [2024-07-24 17:54:18.955702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.486 [2024-07-24 17:54:18.956309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.956811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.956824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.486 [2024-07-24 17:54:18.956835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.486 [2024-07-24 17:54:18.957010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.486 [2024-07-24 17:54:18.957151] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.486 [2024-07-24 17:54:18.957161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.486 [2024-07-24 17:54:18.957175] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.486 [2024-07-24 17:54:18.959245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.486 [2024-07-24 17:54:18.967694] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.486 [2024-07-24 17:54:18.968291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.968739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.486 [2024-07-24 17:54:18.968752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.486 [2024-07-24 17:54:18.968763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.486 [2024-07-24 17:54:18.968892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.486 [2024-07-24 17:54:18.969015] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.487 [2024-07-24 17:54:18.969025] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.487 [2024-07-24 17:54:18.969034] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.487 [2024-07-24 17:54:18.970980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.487 [2024-07-24 17:54:18.979722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.487 [2024-07-24 17:54:18.980320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:18.980733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:18.980746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.487 [2024-07-24 17:54:18.980757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.487 [2024-07-24 17:54:18.980885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.487 [2024-07-24 17:54:18.980993] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.487 [2024-07-24 17:54:18.981003] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.487 [2024-07-24 17:54:18.981012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.487 [2024-07-24 17:54:18.982781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.487 [2024-07-24 17:54:18.991860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.487 [2024-07-24 17:54:18.992434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:18.992930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:18.992943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.487 [2024-07-24 17:54:18.992954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.487 [2024-07-24 17:54:18.993116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.487 [2024-07-24 17:54:18.993224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.487 [2024-07-24 17:54:18.993234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.487 [2024-07-24 17:54:18.993243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.487 [2024-07-24 17:54:18.995088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.487 [2024-07-24 17:54:19.003831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.487 [2024-07-24 17:54:19.004431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.004904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.004918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.487 [2024-07-24 17:54:19.004928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.487 [2024-07-24 17:54:19.005046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.487 [2024-07-24 17:54:19.005185] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.487 [2024-07-24 17:54:19.005195] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.487 [2024-07-24 17:54:19.005204] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.487 [2024-07-24 17:54:19.007077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.487 [2024-07-24 17:54:19.015714] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.487 [2024-07-24 17:54:19.016340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.016840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.016853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.487 [2024-07-24 17:54:19.016863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.487 [2024-07-24 17:54:19.016977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.487 [2024-07-24 17:54:19.017103] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.487 [2024-07-24 17:54:19.017114] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.487 [2024-07-24 17:54:19.017124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.487 [2024-07-24 17:54:19.018845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.487 [2024-07-24 17:54:19.027720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.487 [2024-07-24 17:54:19.028390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.028859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.028873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.487 [2024-07-24 17:54:19.028884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.487 [2024-07-24 17:54:19.028998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.487 [2024-07-24 17:54:19.029124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.487 [2024-07-24 17:54:19.029134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.487 [2024-07-24 17:54:19.029144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.487 [2024-07-24 17:54:19.030911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.487 [2024-07-24 17:54:19.039469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.487 [2024-07-24 17:54:19.040030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.040461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.040475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.487 [2024-07-24 17:54:19.040486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.487 [2024-07-24 17:54:19.040599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.487 [2024-07-24 17:54:19.040708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.487 [2024-07-24 17:54:19.040719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.487 [2024-07-24 17:54:19.040728] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.487 [2024-07-24 17:54:19.042573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.487 [2024-07-24 17:54:19.051416] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.487 [2024-07-24 17:54:19.052033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.052483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.052497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.487 [2024-07-24 17:54:19.052508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.487 [2024-07-24 17:54:19.052652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.487 [2024-07-24 17:54:19.052805] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.487 [2024-07-24 17:54:19.052814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.487 [2024-07-24 17:54:19.052824] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.487 [2024-07-24 17:54:19.054624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.487 [2024-07-24 17:54:19.063499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.487 [2024-07-24 17:54:19.064133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.064525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.064538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.487 [2024-07-24 17:54:19.064548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.487 [2024-07-24 17:54:19.064692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.487 [2024-07-24 17:54:19.064784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.487 [2024-07-24 17:54:19.064793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.487 [2024-07-24 17:54:19.064803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.487 [2024-07-24 17:54:19.066754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.487 [2024-07-24 17:54:19.075470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.487 [2024-07-24 17:54:19.076120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.076595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.487 [2024-07-24 17:54:19.076608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.487 [2024-07-24 17:54:19.076618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.487 [2024-07-24 17:54:19.076762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.487 [2024-07-24 17:54:19.076900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.488 [2024-07-24 17:54:19.076910] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.488 [2024-07-24 17:54:19.076919] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.488 [2024-07-24 17:54:19.078779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.749 [2024-07-24 17:54:19.087375] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.749 [2024-07-24 17:54:19.087947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.749 [2024-07-24 17:54:19.088285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.749 [2024-07-24 17:54:19.088299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.749 [2024-07-24 17:54:19.088310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.749 [2024-07-24 17:54:19.088424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.749 [2024-07-24 17:54:19.088548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.749 [2024-07-24 17:54:19.088558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.749 [2024-07-24 17:54:19.088568] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.749 [2024-07-24 17:54:19.090485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.749 [2024-07-24 17:54:19.099333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.749 [2024-07-24 17:54:19.099880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.749 [2024-07-24 17:54:19.100333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.749 [2024-07-24 17:54:19.100346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.749 [2024-07-24 17:54:19.100357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.750 [2024-07-24 17:54:19.100486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.750 [2024-07-24 17:54:19.100640] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.750 [2024-07-24 17:54:19.100650] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.750 [2024-07-24 17:54:19.100660] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.750 [2024-07-24 17:54:19.102417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.750 [2024-07-24 17:54:19.111246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.750 [2024-07-24 17:54:19.111792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.112008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.112024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.750 [2024-07-24 17:54:19.112035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.750 [2024-07-24 17:54:19.112171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.750 [2024-07-24 17:54:19.112294] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.750 [2024-07-24 17:54:19.112304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.750 [2024-07-24 17:54:19.112313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.750 [2024-07-24 17:54:19.114125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.750 [2024-07-24 17:54:19.123383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.750 [2024-07-24 17:54:19.124029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.124483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.124496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.750 [2024-07-24 17:54:19.124507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.750 [2024-07-24 17:54:19.124668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.750 [2024-07-24 17:54:19.124776] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.750 [2024-07-24 17:54:19.124787] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.750 [2024-07-24 17:54:19.124796] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.750 [2024-07-24 17:54:19.126647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.750 [2024-07-24 17:54:19.135314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.750 [2024-07-24 17:54:19.135949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.136365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.136379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.750 [2024-07-24 17:54:19.136389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.750 [2024-07-24 17:54:19.136491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.750 [2024-07-24 17:54:19.136631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.750 [2024-07-24 17:54:19.136641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.750 [2024-07-24 17:54:19.136652] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.750 [2024-07-24 17:54:19.138380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.750 [2024-07-24 17:54:19.147257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.750 [2024-07-24 17:54:19.147861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.148275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.148290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.750 [2024-07-24 17:54:19.148304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.750 [2024-07-24 17:54:19.148436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.750 [2024-07-24 17:54:19.148590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.750 [2024-07-24 17:54:19.148600] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.750 [2024-07-24 17:54:19.148611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.750 [2024-07-24 17:54:19.150440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.750 [2024-07-24 17:54:19.159195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.750 [2024-07-24 17:54:19.159705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.160156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.160170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.750 [2024-07-24 17:54:19.160181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.750 [2024-07-24 17:54:19.160297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.750 [2024-07-24 17:54:19.160405] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.750 [2024-07-24 17:54:19.160416] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.750 [2024-07-24 17:54:19.160427] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.750 [2024-07-24 17:54:19.162289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.750 [2024-07-24 17:54:19.171307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.750 [2024-07-24 17:54:19.171956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.172667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.172681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.750 [2024-07-24 17:54:19.172693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.750 [2024-07-24 17:54:19.172854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.750 [2024-07-24 17:54:19.172977] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.750 [2024-07-24 17:54:19.172987] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.750 [2024-07-24 17:54:19.172996] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.750 [2024-07-24 17:54:19.174813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.750 [2024-07-24 17:54:19.183266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.750 [2024-07-24 17:54:19.183833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.184304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.184317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.750 [2024-07-24 17:54:19.184328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.750 [2024-07-24 17:54:19.184505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.750 [2024-07-24 17:54:19.184582] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.750 [2024-07-24 17:54:19.184592] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.750 [2024-07-24 17:54:19.184602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.750 [2024-07-24 17:54:19.186528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.750 17:54:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:57.750 17:54:19 -- common/autotest_common.sh@852 -- # return 0 00:28:57.750 17:54:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:57.750 17:54:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:57.750 17:54:19 -- common/autotest_common.sh@10 -- # set +x 00:28:57.750 [2024-07-24 17:54:19.195364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.750 [2024-07-24 17:54:19.195978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.196380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.196394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.750 [2024-07-24 17:54:19.196405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.750 [2024-07-24 17:54:19.196534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.750 [2024-07-24 17:54:19.196656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.750 [2024-07-24 17:54:19.196666] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.750 [2024-07-24 17:54:19.196675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.750 [2024-07-24 17:54:19.198497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.750 [2024-07-24 17:54:19.207657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.750 [2024-07-24 17:54:19.208205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.208655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.750 [2024-07-24 17:54:19.208670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.750 [2024-07-24 17:54:19.208680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.751 [2024-07-24 17:54:19.208826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.751 [2024-07-24 17:54:19.208934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.751 [2024-07-24 17:54:19.208945] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.751 [2024-07-24 17:54:19.208955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.751 [2024-07-24 17:54:19.210761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.751 [2024-07-24 17:54:19.219723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.751 [2024-07-24 17:54:19.220277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.220625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.220638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.751 [2024-07-24 17:54:19.220653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.751 [2024-07-24 17:54:19.220782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.751 [2024-07-24 17:54:19.220950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.751 [2024-07-24 17:54:19.220960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.751 [2024-07-24 17:54:19.220970] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.751 [2024-07-24 17:54:19.222762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.751 17:54:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.751 17:54:19 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.751 17:54:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:57.751 17:54:19 -- common/autotest_common.sh@10 -- # set +x 00:28:57.751 [2024-07-24 17:54:19.232002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.751 [2024-07-24 17:54:19.232274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.232677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.232691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.751 [2024-07-24 17:54:19.232702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.751 [2024-07-24 17:54:19.232846] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.751 [2024-07-24 17:54:19.232938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.751 [2024-07-24 17:54:19.232948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.751 [2024-07-24 17:54:19.232958] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.751 [2024-07-24 17:54:19.234811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.751 [2024-07-24 17:54:19.236950] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.751 17:54:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:57.751 17:54:19 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:57.751 17:54:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:57.751 17:54:19 -- common/autotest_common.sh@10 -- # set +x 00:28:57.751 [2024-07-24 17:54:19.244058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.751 [2024-07-24 17:54:19.244584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.244939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.244952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.751 [2024-07-24 17:54:19.244963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.751 [2024-07-24 17:54:19.245143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.751 [2024-07-24 17:54:19.245237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.751 [2024-07-24 17:54:19.245247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.751 [2024-07-24 17:54:19.245257] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:57.751 [2024-07-24 17:54:19.247109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.751 [2024-07-24 17:54:19.256080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.751 [2024-07-24 17:54:19.256386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.256857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.256871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.751 [2024-07-24 17:54:19.256881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.751 [2024-07-24 17:54:19.256965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.751 [2024-07-24 17:54:19.257094] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.751 [2024-07-24 17:54:19.257105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.751 [2024-07-24 17:54:19.257116] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.751 [2024-07-24 17:54:19.258976] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.751 [2024-07-24 17:54:19.268141] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.751 [2024-07-24 17:54:19.268713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.269131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.269146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.751 [2024-07-24 17:54:19.269156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.751 [2024-07-24 17:54:19.269271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.751 [2024-07-24 17:54:19.269443] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.751 [2024-07-24 17:54:19.269453] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.751 [2024-07-24 17:54:19.269463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.751 [2024-07-24 17:54:19.271310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.751 [2024-07-24 17:54:19.280066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.751 [2024-07-24 17:54:19.280614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.281087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.281101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.751 [2024-07-24 17:54:19.281112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.751 [2024-07-24 17:54:19.281212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.751 [2024-07-24 17:54:19.281304] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.751 [2024-07-24 17:54:19.281315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.751 [2024-07-24 17:54:19.281325] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.751 Malloc0 00:28:57.751 17:54:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:57.751 [2024-07-24 17:54:19.283159] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.751 17:54:19 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.751 17:54:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:57.751 17:54:19 -- common/autotest_common.sh@10 -- # set +x 00:28:57.751 [2024-07-24 17:54:19.292099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.751 [2024-07-24 17:54:19.292671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.293120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.293134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.751 [2024-07-24 17:54:19.293144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.751 [2024-07-24 17:54:19.293274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.751 [2024-07-24 17:54:19.293397] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.751 [2024-07-24 17:54:19.293407] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.751 [2024-07-24 17:54:19.293417] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.751 17:54:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:57.751 17:54:19 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:57.751 [2024-07-24 17:54:19.295174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.751 17:54:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:57.751 17:54:19 -- common/autotest_common.sh@10 -- # set +x 00:28:57.751 17:54:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:57.751 17:54:19 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.751 17:54:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:57.751 17:54:19 -- common/autotest_common.sh@10 -- # set +x 00:28:57.751 [2024-07-24 17:54:19.304007] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.751 [2024-07-24 17:54:19.304534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.305008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.751 [2024-07-24 17:54:19.305021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fc900 with addr=10.0.0.2, port=4420 00:28:57.752 [2024-07-24 17:54:19.305032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fc900 is same with the state(5) to be set 00:28:57.752 [2024-07-24 17:54:19.305149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fc900 (9): Bad file descriptor 00:28:57.752 [2024-07-24 17:54:19.305287] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.752 [2024-07-24 17:54:19.305297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.752 [2024-07-24 17:54:19.305307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.752 [2024-07-24 17:54:19.306135] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.752 [2024-07-24 17:54:19.307023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.752 17:54:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:57.752 17:54:19 -- host/bdevperf.sh@38 -- # wait 777852 00:28:57.752 [2024-07-24 17:54:19.316033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.012 [2024-07-24 17:54:19.386078] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
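Interleaved with the reset noise above, the harness has been rebuilding the target over RPC: create the TCP transport, back it with a Malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and re-add the 10.0.0.2:4420 listener, at which point the pending host resets finally succeed ("Resetting controller successful"). The rpc_cmd calls in the trace correspond roughly to the following scripts/rpc.py sequence (a sketch using the same flags shown in the trace; the harness actually routes these through its rpc_cmd wrapper and its own RPC socket, and the rpc.py path here is an assumption):

  # target bring-up as driven by host/bdevperf.sh (sketch; rpc.py path and socket are assumptions)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener comes up, the queued reconnects drain and the bdevperf process waited on as pid 777852 completes its verify workload (queue depth 128, 4 KiB I/O, 15 s runtime per the job line), summarized in the latency table below.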
00:29:07.993 00:29:07.993 Latency(us) 00:29:07.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.993 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:07.993 Verification LBA range: start 0x0 length 0x4000 00:29:07.993 Nvme1n1 : 15.00 12129.34 47.38 18367.90 0.00 4185.39 1168.25 25302.59 00:29:07.993 =================================================================================================================== 00:29:07.993 Total : 12129.34 47.38 18367.90 0.00 4185.39 1168.25 25302.59 00:29:07.993 17:54:28 -- host/bdevperf.sh@39 -- # sync 00:29:07.993 17:54:28 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:07.993 17:54:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:07.993 17:54:28 -- common/autotest_common.sh@10 -- # set +x 00:29:07.993 17:54:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:07.993 17:54:28 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:07.993 17:54:28 -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:07.993 17:54:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:07.993 17:54:28 -- nvmf/common.sh@116 -- # sync 00:29:07.993 17:54:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:07.993 17:54:28 -- nvmf/common.sh@119 -- # set +e 00:29:07.993 17:54:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:07.993 17:54:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:07.993 rmmod nvme_tcp 00:29:07.993 rmmod nvme_fabrics 00:29:07.993 rmmod nvme_keyring 00:29:07.993 17:54:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:07.993 17:54:28 -- nvmf/common.sh@123 -- # set -e 00:29:07.993 17:54:28 -- nvmf/common.sh@124 -- # return 0 00:29:07.993 17:54:28 -- nvmf/common.sh@477 -- # '[' -n 778913 ']' 00:29:07.993 17:54:28 -- nvmf/common.sh@478 -- # killprocess 778913 00:29:07.993 17:54:28 -- common/autotest_common.sh@926 -- # '[' -z 778913 ']' 00:29:07.993 17:54:28 -- common/autotest_common.sh@930 -- # kill -0 778913 00:29:07.993 17:54:28 -- common/autotest_common.sh@931 -- # uname 00:29:07.993 17:54:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:07.993 17:54:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 778913 00:29:07.993 17:54:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:07.993 17:54:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:07.993 17:54:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 778913' 00:29:07.993 killing process with pid 778913 00:29:07.993 17:54:28 -- common/autotest_common.sh@945 -- # kill 778913 00:29:07.993 17:54:28 -- common/autotest_common.sh@950 -- # wait 778913 00:29:07.993 17:54:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:07.993 17:54:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:07.993 17:54:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:07.993 17:54:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:07.994 17:54:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:07.994 17:54:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.994 17:54:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:07.994 17:54:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.934 17:54:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:08.934 00:29:08.934 real 0m26.097s 00:29:08.934 user 1m2.764s 00:29:08.934 sys 0m6.242s 00:29:08.934 17:54:30 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:29:08.934 17:54:30 -- common/autotest_common.sh@10 -- # set +x 00:29:08.934 ************************************ 00:29:08.934 END TEST nvmf_bdevperf 00:29:08.934 ************************************ 00:29:08.934 17:54:30 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:08.934 17:54:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:08.934 17:54:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:08.934 17:54:30 -- common/autotest_common.sh@10 -- # set +x 00:29:09.194 ************************************ 00:29:09.194 START TEST nvmf_target_disconnect 00:29:09.194 ************************************ 00:29:09.194 17:54:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:09.194 * Looking for test storage... 00:29:09.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:09.194 17:54:30 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.194 17:54:30 -- nvmf/common.sh@7 -- # uname -s 00:29:09.194 17:54:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.194 17:54:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.194 17:54:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.194 17:54:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.194 17:54:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.194 17:54:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.194 17:54:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.194 17:54:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.194 17:54:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.194 17:54:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.194 17:54:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:09.194 17:54:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:09.194 17:54:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.194 17:54:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.194 17:54:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.194 17:54:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.194 17:54:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.195 17:54:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.195 17:54:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.195 17:54:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.195 17:54:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.195 17:54:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.195 17:54:30 -- paths/export.sh@5 -- # export PATH 00:29:09.195 17:54:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.195 17:54:30 -- nvmf/common.sh@46 -- # : 0 00:29:09.195 17:54:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:09.195 17:54:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:09.195 17:54:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:09.195 17:54:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.195 17:54:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.195 17:54:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:09.195 17:54:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:09.195 17:54:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:09.195 17:54:30 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:09.195 17:54:30 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:09.195 17:54:30 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:09.195 17:54:30 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:09.195 17:54:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:09.195 17:54:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.195 17:54:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:09.195 17:54:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:09.195 17:54:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:09.195 17:54:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.195 17:54:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.195 17:54:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.195 17:54:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:09.195 17:54:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:09.195 17:54:30 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:29:09.195 17:54:30 -- common/autotest_common.sh@10 -- # set +x 00:29:14.479 17:54:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:14.479 17:54:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:14.479 17:54:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:14.479 17:54:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:14.479 17:54:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:14.479 17:54:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:14.479 17:54:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:14.479 17:54:35 -- nvmf/common.sh@294 -- # net_devs=() 00:29:14.479 17:54:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:14.479 17:54:35 -- nvmf/common.sh@295 -- # e810=() 00:29:14.479 17:54:35 -- nvmf/common.sh@295 -- # local -ga e810 00:29:14.479 17:54:35 -- nvmf/common.sh@296 -- # x722=() 00:29:14.479 17:54:35 -- nvmf/common.sh@296 -- # local -ga x722 00:29:14.479 17:54:35 -- nvmf/common.sh@297 -- # mlx=() 00:29:14.479 17:54:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:14.479 17:54:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.479 17:54:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:14.479 17:54:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:14.479 17:54:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:14.479 17:54:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:14.479 17:54:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:14.479 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:14.479 17:54:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:14.479 17:54:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:14.479 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:14.479 17:54:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:14.479 17:54:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:14.479 17:54:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.479 17:54:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:14.479 17:54:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.479 17:54:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:14.479 Found net devices under 0000:86:00.0: cvl_0_0 00:29:14.479 17:54:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.479 17:54:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:14.479 17:54:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.479 17:54:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:14.479 17:54:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.479 17:54:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:14.479 Found net devices under 0000:86:00.1: cvl_0_1 00:29:14.479 17:54:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.479 17:54:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:14.479 17:54:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:14.479 17:54:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:14.479 17:54:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.479 17:54:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.479 17:54:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.479 17:54:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:14.479 17:54:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.479 17:54:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.479 17:54:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:14.479 17:54:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.479 17:54:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.479 17:54:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:14.479 17:54:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:14.479 17:54:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.479 17:54:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.479 17:54:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.479 17:54:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.479 17:54:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:14.479 17:54:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.479 17:54:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.479 17:54:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.479 17:54:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:14.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:14.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:29:14.479 00:29:14.479 --- 10.0.0.2 ping statistics --- 00:29:14.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.479 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:29:14.479 17:54:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:29:14.479 00:29:14.479 --- 10.0.0.1 ping statistics --- 00:29:14.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.479 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:29:14.479 17:54:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.479 17:54:35 -- nvmf/common.sh@410 -- # return 0 00:29:14.479 17:54:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:14.479 17:54:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.479 17:54:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:14.479 17:54:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.479 17:54:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:14.479 17:54:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:14.479 17:54:35 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:14.479 17:54:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:14.480 17:54:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.480 17:54:35 -- common/autotest_common.sh@10 -- # set +x 00:29:14.480 ************************************ 00:29:14.480 START TEST nvmf_target_disconnect_tc1 00:29:14.480 ************************************ 00:29:14.480 17:54:35 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:29:14.480 17:54:35 -- host/target_disconnect.sh@32 -- # set +e 00:29:14.480 17:54:35 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:14.480 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.480 [2024-07-24 17:54:35.570242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-07-24 17:54:35.570736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-07-24 17:54:35.570797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x892610 with addr=10.0.0.2, port=4420 00:29:14.480 [2024-07-24 17:54:35.570858] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:14.480 [2024-07-24 17:54:35.570897] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:14.480 [2024-07-24 17:54:35.570926] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:14.480 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:14.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:14.480 Initializing NVMe Controllers 00:29:14.480 17:54:35 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:14.480 17:54:35 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:14.480 17:54:35 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:29:14.480 17:54:35 -- common/autotest_common.sh@1132 -- # return 0 00:29:14.480 
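The disconnect tests reuse the two-sided network that nvmftestinit rebuilt in the trace above: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the default namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and a single ping in each direction proves the path. Condensed from the trace (not a verbatim excerpt of nvmf/common.sh):

  # namespace plumbing performed by nvmftestinit, condensed from the trace above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                              # initiator -> target (0.177 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator (0.293 ms above)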
17:54:35 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:14.480 17:54:35 -- host/target_disconnect.sh@41 -- # set -e 00:29:14.480 00:29:14.480 real 0m0.086s 00:29:14.480 user 0m0.044s 00:29:14.480 sys 0m0.041s 00:29:14.480 17:54:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.480 17:54:35 -- common/autotest_common.sh@10 -- # set +x 00:29:14.480 ************************************ 00:29:14.480 END TEST nvmf_target_disconnect_tc1 00:29:14.480 ************************************ 00:29:14.480 17:54:35 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:14.480 17:54:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:14.480 17:54:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.480 17:54:35 -- common/autotest_common.sh@10 -- # set +x 00:29:14.480 ************************************ 00:29:14.480 START TEST nvmf_target_disconnect_tc2 00:29:14.480 ************************************ 00:29:14.480 17:54:35 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:29:14.480 17:54:35 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:29:14.480 17:54:35 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:14.480 17:54:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:14.480 17:54:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:14.480 17:54:35 -- common/autotest_common.sh@10 -- # set +x 00:29:14.480 17:54:35 -- nvmf/common.sh@469 -- # nvmfpid=783880 00:29:14.480 17:54:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:14.480 17:54:35 -- nvmf/common.sh@470 -- # waitforlisten 783880 00:29:14.480 17:54:35 -- common/autotest_common.sh@819 -- # '[' -z 783880 ']' 00:29:14.480 17:54:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.480 17:54:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:14.480 17:54:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.480 17:54:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:14.480 17:54:35 -- common/autotest_common.sh@10 -- # set +x 00:29:14.480 [2024-07-24 17:54:35.662841] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:14.480 [2024-07-24 17:54:35.662883] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.480 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.480 [2024-07-24 17:54:35.733620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:14.480 [2024-07-24 17:54:35.808589] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:14.480 [2024-07-24 17:54:35.808698] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.480 [2024-07-24 17:54:35.808704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.480 [2024-07-24 17:54:35.808710] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
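tc1 passes precisely because the connection attempt fails: the reconnect example is pointed at 10.0.0.2:4420 while no SPDK target is listening, spdk_nvme_probe() reports "Create probe context failed", and the harness checks the captured exit status for that expected failure. tc2 then does the opposite, starting a real nvmf_tgt (-m 0xF0, pid 783880) inside the namespace and wiring up the same cnode1 subsystem before launching the workload. The tc1 logic boils down to something like this (paraphrased; 'rc' is an illustrative name, the real check lives in host/target_disconnect.sh):

  # tc1 sketch: the probe is expected to fail because nothing listens on 10.0.0.2:4420 yet
  set +e
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
      -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  rc=$?
  set -e
  [ "$rc" -ne 0 ]                                                 # failure here is the passing condition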
00:29:14.480 [2024-07-24 17:54:35.808834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:14.480 [2024-07-24 17:54:35.808941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:14.480 [2024-07-24 17:54:35.809065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:14.480 [2024-07-24 17:54:35.809066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:15.049 17:54:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:15.049 17:54:36 -- common/autotest_common.sh@852 -- # return 0 00:29:15.049 17:54:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:15.049 17:54:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:15.049 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:29:15.049 17:54:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.049 17:54:36 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:15.049 17:54:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.049 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:29:15.049 Malloc0 00:29:15.049 17:54:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.049 17:54:36 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:15.049 17:54:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.049 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:29:15.049 [2024-07-24 17:54:36.500358] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.049 17:54:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.049 17:54:36 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:15.049 17:54:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.049 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:29:15.049 17:54:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.049 17:54:36 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:15.049 17:54:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.049 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:29:15.049 17:54:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.049 17:54:36 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.049 17:54:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.049 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:29:15.049 [2024-07-24 17:54:36.525440] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.049 17:54:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.049 17:54:36 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:15.049 17:54:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.049 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:29:15.049 17:54:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.049 17:54:36 -- host/target_disconnect.sh@50 -- # reconnectpid=783907 00:29:15.049 17:54:36 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:15.049 17:54:36 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:15.049 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.959 17:54:38 -- host/target_disconnect.sh@53 -- # kill -9 783880 00:29:16.959 17:54:38 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.959 starting I/O failed 00:29:16.959 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 [2024-07-24 17:54:38.551000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with 
error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 [2024-07-24 17:54:38.551239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 
00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Write completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 [2024-07-24 17:54:38.551434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.960 Read completed with error (sct=0, sc=8) 00:29:16.960 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Write completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting 
I/O failed 00:29:16.961 Write completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Write completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Write completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Write completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Read completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Write completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 Write completed with error (sct=0, sc=8) 00:29:16.961 starting I/O failed 00:29:16.961 [2024-07-24 17:54:38.551620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.961 [2024-07-24 17:54:38.552001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.961 [2024-07-24 17:54:38.552539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.961 [2024-07-24 17:54:38.552574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:16.961 qpair failed and we were unable to recover it. 00:29:16.961 [2024-07-24 17:54:38.553065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.961 [2024-07-24 17:54:38.553505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.961 [2024-07-24 17:54:38.553535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:16.961 qpair failed and we were unable to recover it. 00:29:16.961 [2024-07-24 17:54:38.554072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.961 [2024-07-24 17:54:38.554587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.961 [2024-07-24 17:54:38.554618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:16.961 qpair failed and we were unable to recover it. 00:29:16.961 [2024-07-24 17:54:38.555064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.961 [2024-07-24 17:54:38.555518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.961 [2024-07-24 17:54:38.555548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:16.961 qpair failed and we were unable to recover it. 
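For readers skimming the failure burst above: each completion reports an NVMe status pair (sct, sc), and the "CQ transport error -6" is a negated errno. The short C sketch below is illustrative only, not SPDK code. It assumes sct=0 refers to the NVMe Generic Command Status type, in which case sc=0x08 is "Command Aborted due to SQ Deletion" (consistent with the queue pairs being torn down here), and it uses strerror() to show that -6 is -ENXIO, "No such device or address", matching the parenthetical in the log line itself.

/* Illustrative decoder for the values logged above; not SPDK code.
 * Assumption: sct=0 means the NVMe "Generic Command Status" type, whose
 * code 0x08 is "Command Aborted due to SQ Deletion" in the NVMe base spec. */
#include <stdio.h>
#include <string.h>

static const char *generic_status(int sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other/unlisted generic status";
    }
}

int main(void)
{
    int sct = 0, sc = 8;        /* from "(sct=0, sc=8)" in the log above */
    int transport_err = -6;     /* from "CQ transport error -6" above */

    printf("sct=%d, sc=0x%02x -> %s\n", sct, sc,
           sct == 0 ? generic_status(sc) : "non-generic status type");

    /* On Linux, strerror(6) is "No such device or address" (ENXIO),
     * which matches the text printed alongside the transport error. */
    printf("transport error %d -> %s\n", transport_err, strerror(-transport_err));
    return 0;
}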
00:29:16.961 [2024-07-24 17:54:38.556067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.961 [2024-07-24 17:54:38.556516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.961 [2024-07-24 17:54:38.556545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:16.961 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.557100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.557561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.557571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.558060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.558527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.558556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.559013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.559455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.559485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.559993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.560458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.560489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.560996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.561348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.561379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.561772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.562205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.562237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 
00:29:17.228 [2024-07-24 17:54:38.562719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.563103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.563113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.563578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.564085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.564116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.564545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.565077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.565108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.565567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.566031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.566041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.566439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.566784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.566813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.567261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.567715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.567745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.568279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.568710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.568740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 
00:29:17.228 [2024-07-24 17:54:38.569187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.569661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.569690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.570199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.570629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.570658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.571210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.571739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.228 [2024-07-24 17:54:38.571768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.228 qpair failed and we were unable to recover it. 00:29:17.228 [2024-07-24 17:54:38.572296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.572775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.572804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.573225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.573729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.573759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.574244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.574731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.574760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.575269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.575717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.575747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 
00:29:17.229 [2024-07-24 17:54:38.576230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.576755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.576784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.577306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.577759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.577789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.578278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.578812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.578841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.579294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.579774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.579803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.580310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.580787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.580816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.581325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.581833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.581863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.582383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.582861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.582890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 
00:29:17.229 [2024-07-24 17:54:38.583455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.583968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.583997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.584546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.585062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.585093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.585623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.586127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.586158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.586659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.587128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.587142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.587612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.588137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.588168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.588719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.589221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.589252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.589756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.590268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.590299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 
00:29:17.229 [2024-07-24 17:54:38.590739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.591272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.591303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.591756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.592267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.592320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.592850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.593351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.593382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.593819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.594301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.594332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.594843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.595289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.595320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.595801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.596349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.596380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.596913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.597387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.597419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 
00:29:17.229 [2024-07-24 17:54:38.597852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.598338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.598369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.598904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.599373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.599403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.599831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.600351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.229 [2024-07-24 17:54:38.600382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.229 qpair failed and we were unable to recover it. 00:29:17.229 [2024-07-24 17:54:38.600837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.601341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.601373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.601890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.602417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.602449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.603053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.603483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.603513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.604024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.604459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.604490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 
00:29:17.230 [2024-07-24 17:54:38.604862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.605336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.605369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.605841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.606347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.606382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.606939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.607372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.607403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.607789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.608285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.608317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.608815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.609237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.609268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.609694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.610224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.610257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.610681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.611198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.611230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 
00:29:17.230 [2024-07-24 17:54:38.611653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.612115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.612148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.612575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.613025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.613078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.613513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.613990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.614019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.614524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.614957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.614986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.615439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.615916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.615946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.616387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.616763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.616792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.617285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.617762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.617792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 
00:29:17.230 [2024-07-24 17:54:38.618292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.618726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.618755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.619258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.619692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.619721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.620248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.620673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.620702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.621257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.621788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.621817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.622311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.622815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.622844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.623337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.623865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.623894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.624435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.624795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.624824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 
00:29:17.230 [2024-07-24 17:54:38.625344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.625777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.625807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.626361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.626844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.626877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.627359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.627780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.627809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.230 qpair failed and we were unable to recover it. 00:29:17.230 [2024-07-24 17:54:38.628231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.230 [2024-07-24 17:54:38.628654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.628683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.629186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.629708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.629738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.630259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.630759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.630789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.631201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.631635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.631672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 
00:29:17.231 [2024-07-24 17:54:38.632202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.632700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.632730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.633222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.633652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.633681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.634132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.634559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.634588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.635106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.635630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.635659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.636181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.636683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.636718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.637173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.637698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.637728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.638235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.638670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.638700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 
00:29:17.231 [2024-07-24 17:54:38.639147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.639675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.639704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.640233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.640659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.640689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.641129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.641578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.641613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.642149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.642651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.642680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.643119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.643564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.643593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.644130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.644621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.644650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.645155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.645660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.645689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 
00:29:17.231 [2024-07-24 17:54:38.646177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.646603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.646633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.647185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.647679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.647708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.648185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.648679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.648709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.649134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.649642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.649671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.650151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.650679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.650709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.651191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.651622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.651657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 00:29:17.231 [2024-07-24 17:54:38.652068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.652495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.231 [2024-07-24 17:54:38.652525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.231 qpair failed and we were unable to recover it. 
00:29:17.231 [2024-07-24 17:54:38.653083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.231 [2024-07-24 17:54:38.653589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.231 [2024-07-24 17:54:38.653619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420
00:29:17.231 qpair failed and we were unable to recover it.
00:29:17.231-00:29:17.237 [2024-07-24 17:54:38.654 through 17:54:38.811] The same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for this qpair throughout this window, with no successful reconnect recorded.
00:29:17.237 [2024-07-24 17:54:38.812132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.812627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.812657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.237 qpair failed and we were unable to recover it. 00:29:17.237 [2024-07-24 17:54:38.813098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.813600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.813614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.237 qpair failed and we were unable to recover it. 00:29:17.237 [2024-07-24 17:54:38.814083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.814514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.814545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.237 qpair failed and we were unable to recover it. 00:29:17.237 [2024-07-24 17:54:38.815079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.815646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.815676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.237 qpair failed and we were unable to recover it. 00:29:17.237 [2024-07-24 17:54:38.816202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.816711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.816725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.237 qpair failed and we were unable to recover it. 00:29:17.237 [2024-07-24 17:54:38.817139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.817540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.817556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.237 qpair failed and we were unable to recover it. 00:29:17.237 [2024-07-24 17:54:38.817954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.818435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.818449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.237 qpair failed and we were unable to recover it. 
00:29:17.237 [2024-07-24 17:54:38.818892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.819464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.237 [2024-07-24 17:54:38.819479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.237 qpair failed and we were unable to recover it. 00:29:17.504 [2024-07-24 17:54:38.819998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.820357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.820372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.504 qpair failed and we were unable to recover it. 00:29:17.504 [2024-07-24 17:54:38.820792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.821200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.821215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.504 qpair failed and we were unable to recover it. 00:29:17.504 [2024-07-24 17:54:38.821621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.822095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.822110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.504 qpair failed and we were unable to recover it. 00:29:17.504 [2024-07-24 17:54:38.822524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.822996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.823010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.504 qpair failed and we were unable to recover it. 00:29:17.504 [2024-07-24 17:54:38.823523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.823915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.823928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.504 qpair failed and we were unable to recover it. 00:29:17.504 [2024-07-24 17:54:38.824388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.824803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.824832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.504 qpair failed and we were unable to recover it. 
00:29:17.504 [2024-07-24 17:54:38.825272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.825704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.825718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.504 qpair failed and we were unable to recover it. 00:29:17.504 [2024-07-24 17:54:38.826204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.826724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.826739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.504 qpair failed and we were unable to recover it. 00:29:17.504 [2024-07-24 17:54:38.827248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.827654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.827668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.504 qpair failed and we were unable to recover it. 00:29:17.504 [2024-07-24 17:54:38.828147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.828598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.828611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.504 qpair failed and we were unable to recover it. 00:29:17.504 [2024-07-24 17:54:38.829099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.504 [2024-07-24 17:54:38.829595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.829609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.830091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.830601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.830615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.831108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.831518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.831532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 
00:29:17.505 [2024-07-24 17:54:38.831961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.832422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.832437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.832899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.833328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.833342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.833688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.834148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.834162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.834669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.835156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.835171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.835645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.836171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.836186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.836672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.837189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.837203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.837712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.838132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.838146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 
00:29:17.505 [2024-07-24 17:54:38.838546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.839006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.839019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.839544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.839974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.839988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.840482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.840911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.840925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.841343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.841840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.841870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.842342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.842775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.842805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.843353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.843838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.843851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.844334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.844773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.844787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 
00:29:17.505 [2024-07-24 17:54:38.845245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.845733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.845747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.846242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.846750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.846764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.847286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.847742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.847772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.848234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.848724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.848754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.849271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.849729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.849758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.850228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.850739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.850768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.851315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.851836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.851865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 
00:29:17.505 [2024-07-24 17:54:38.852252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.852768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.852797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.853341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.853778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.853808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.854404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.854980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.855010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.855588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.856107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.505 [2024-07-24 17:54:38.856141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.505 qpair failed and we were unable to recover it. 00:29:17.505 [2024-07-24 17:54:38.856615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.857105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.857120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.857617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.858124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.858138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.858674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.859180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.859211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 
00:29:17.506 [2024-07-24 17:54:38.859753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.860289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.860320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.860717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.861225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.861256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.861716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.862229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.862260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.862800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.863250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.863265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.863701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.864220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.864251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.864801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.865316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.865347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.865825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.866338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.866369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 
00:29:17.506 [2024-07-24 17:54:38.866925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.867398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.867430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.868005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.868505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.868536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.869065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.869585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.869616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.870066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.870530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.870560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.871106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.871602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.871632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.872156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.872716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.872746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.873298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.873840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.873871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 
00:29:17.506 [2024-07-24 17:54:38.874297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.874843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.874873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.875438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.875889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.875919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.876461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.877002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.877031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.877564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.878030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.878072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.878581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.879106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.879137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.879677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.880192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.880223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.880771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.881307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.881338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 
00:29:17.506 [2024-07-24 17:54:38.881908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.882419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.882451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.882903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.883391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.883422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.883987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.884518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.884549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.885084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.885576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.885606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.506 qpair failed and we were unable to recover it. 00:29:17.506 [2024-07-24 17:54:38.886142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.506 [2024-07-24 17:54:38.886561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.886591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.887135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.887608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.887638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.888148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.888665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.888696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 
00:29:17.507 [2024-07-24 17:54:38.889238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.889755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.889784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.890227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.890629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.890658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.891169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.891593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.891622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.892142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.892698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.892727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.893239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.893751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.893781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.894305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.894824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.894853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.895422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.895931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.895960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 
00:29:17.507 [2024-07-24 17:54:38.896526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.897023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.897072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.897625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.898139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.898170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.898677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.899138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.899170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.899688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.900202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.900233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.900706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.901115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.901146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.901602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.902088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.902118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.902668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.903179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.903210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 
00:29:17.507 [2024-07-24 17:54:38.903707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.904198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.904242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.904790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.905285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.905317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.905872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.906360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.906402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.906919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.907472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.907504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.908027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.908588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.908618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.909112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.909643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.909673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.910213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.910732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.910761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 
00:29:17.507 [2024-07-24 17:54:38.911313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.911819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.911848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.912371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.912928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.912958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.507 qpair failed and we were unable to recover it. 00:29:17.507 [2024-07-24 17:54:38.913498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.507 [2024-07-24 17:54:38.914015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.914055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.914452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.914887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.914916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.915438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.915953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.915983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.916431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.916951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.916981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.917529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.917914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.917943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 
00:29:17.508 [2024-07-24 17:54:38.918425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.918933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.918962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.919510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.920017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.920057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.920583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.921106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.921151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.921567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.922077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.922108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.922582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.923038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.923088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.923602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.924117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.924149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.924697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.925208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.925239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 
00:29:17.508 [2024-07-24 17:54:38.925758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.926258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.926290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.926793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.927279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.927310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.927848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.928286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.928318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.928831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.929271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.929301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.929819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.930359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.930391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.930840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.931379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.931415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.931832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.932371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.932403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 
00:29:17.508 [2024-07-24 17:54:38.932971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.933488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.933518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.934067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.934582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.934612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.935137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.935659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.935689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.936233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.936765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.936795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.937355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.937804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.937834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.938288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.938806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.938836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.939387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.939825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.939854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 
00:29:17.508 [2024-07-24 17:54:38.940404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.940927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.940957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.941476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.941992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.942021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.508 qpair failed and we were unable to recover it. 00:29:17.508 [2024-07-24 17:54:38.942547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.508 [2024-07-24 17:54:38.942967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.942997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.943530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.944019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.944060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.944503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.945016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.945055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.945593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.946080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.946111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.946651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.947191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.947222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 
00:29:17.509 [2024-07-24 17:54:38.947668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.948207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.948238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.948651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.949143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.949174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.949694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.950244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.950276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.950746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.951233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.951270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.951821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.952342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.952372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.952855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.953356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.953388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.953957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.954452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.954484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 
00:29:17.509 [2024-07-24 17:54:38.955063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.955555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.955585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.956138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.956658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.956688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.957229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.957750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.957780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.958314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.958802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.958832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.959396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.959911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.959941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.960406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.960900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.960930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.961423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.961933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.961969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 
00:29:17.509 [2024-07-24 17:54:38.962514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.962945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.962975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.963485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.963980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.964010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.964560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.965082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.965113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.509 qpair failed and we were unable to recover it. 00:29:17.509 [2024-07-24 17:54:38.965650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.509 [2024-07-24 17:54:38.966089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.966120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.966601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.967033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.967079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.967621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.968142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.968172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.968665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.969120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.969151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 
00:29:17.510 [2024-07-24 17:54:38.969694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.970234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.970266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.970720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.971213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.971244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.971796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.972259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.972296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.972803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.973265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.973296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.973834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.974348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.974380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.974879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.975314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.975345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.975897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.976324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.976356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 
00:29:17.510 [2024-07-24 17:54:38.976823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.977336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.977367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.977838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.978375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.978407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.978821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.979240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.979271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.979802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.980305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.980337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.980850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.981346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.981377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.981885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.982400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.982437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.982981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.983472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.983503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 
00:29:17.510 [2024-07-24 17:54:38.984034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.984534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.984565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.985147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.985658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.985687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.986255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.986771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.986801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.987267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.987787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.987817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.988364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.988836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.988866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.989311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.989834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.989863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.990464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.990841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.990871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 
00:29:17.510 [2024-07-24 17:54:38.991377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.991896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.991925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.992468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.992984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.993013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.993597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.994114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.994146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.994689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.995202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.510 [2024-07-24 17:54:38.995234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.510 qpair failed and we were unable to recover it. 00:29:17.510 [2024-07-24 17:54:38.995786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:38.996215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:38.996246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:38.996779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:38.997358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:38.997389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:38.997920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:38.998414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:38.998445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 
00:29:17.511 [2024-07-24 17:54:38.998917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:38.999398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:38.999429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:38.999991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.000449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.000480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.001017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.001492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.001524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.002031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.002553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.002582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.003096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.003612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.003643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.004092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.004628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.004658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.005224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.005741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.005771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 
00:29:17.511 [2024-07-24 17:54:39.006281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.006767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.006782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.007300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.007846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.007876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.008323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.008862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.008891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.009429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.009916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.009946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.010484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.010941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.010971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.011499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.012062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.012094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.012615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.013068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.013099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 
00:29:17.511 [2024-07-24 17:54:39.013569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.014082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.014113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.014682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.015169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.015201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.015712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.016174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.016205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.016746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.017291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.017322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.017866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.018403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.018435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.018952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.019411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.019442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.019984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.020432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.020463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 
00:29:17.511 [2024-07-24 17:54:39.020902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.021404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.021435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.021977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.022492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.022524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.023020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.023472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.023503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.023996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.024542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.024574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.511 qpair failed and we were unable to recover it. 00:29:17.511 [2024-07-24 17:54:39.025057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.511 [2024-07-24 17:54:39.025567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.025596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.026158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.026658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.026695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.027149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.027694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.027724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 
00:29:17.512 [2024-07-24 17:54:39.028294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.028813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.028842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.029367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.029918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.029948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.030486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.030977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.031007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.031515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.031954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.031984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.032448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.032977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.033007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.033551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.034014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.034054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.034600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.035091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.035122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 
00:29:17.512 [2024-07-24 17:54:39.035655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.036189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.036221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.036656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.037117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.037148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.037595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.038124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.038155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.038603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.039070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.039102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.039641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.040062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.040093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.040615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.041137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.041168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.041667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.042210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.042240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 
00:29:17.512 [2024-07-24 17:54:39.042813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.043326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.043358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.043902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.044335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.044365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.044809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.045261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.045292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.045794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.046282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.046313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.046870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.047317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.047332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.047807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.048332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.048363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.048933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.049380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.049411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 
00:29:17.512 [2024-07-24 17:54:39.049934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.050472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.050504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.051013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.051507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.051539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.512 qpair failed and we were unable to recover it. 00:29:17.512 [2024-07-24 17:54:39.052007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.052532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.512 [2024-07-24 17:54:39.052564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.053103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.053565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.053579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.054023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.054472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.054501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.055018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.055464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.055495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.056025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.056557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.056587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 
00:29:17.513 [2024-07-24 17:54:39.057104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.057548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.057577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.058075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.058482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.058496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.058963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.059465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.059497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.059944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.060426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.060441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.060931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.061401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.061432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.061982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.062390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.062405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.062809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.063270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.063284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 
00:29:17.513 [2024-07-24 17:54:39.063689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.064155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.064169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.064688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.065198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.065229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.065706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.066162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.066193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.066683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.067114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.067128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.067537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.067903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.067917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.068396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.068878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.068891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.069421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.069956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.069971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 
00:29:17.513 [2024-07-24 17:54:39.070411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.070893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.070907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.071410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.071826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.071840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.072325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.072844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.072859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.073374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.073888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.073902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.074392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.074800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.074814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.075303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.075768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.075799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.076315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.076824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.076838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 
00:29:17.513 [2024-07-24 17:54:39.077274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.077751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.077764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.078262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.078749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.078778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.079321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.079808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.513 [2024-07-24 17:54:39.079837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.513 qpair failed and we were unable to recover it. 00:29:17.513 [2024-07-24 17:54:39.080283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.080763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.080777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.081200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.081658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.081672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.082145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.082634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.082648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.083066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.083473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.083487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 
00:29:17.514 [2024-07-24 17:54:39.083908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.084381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.084396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.084836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.085322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.085337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.085742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.086223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.086254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.086713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.087149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.087163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.087644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.088152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.088167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.088604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.089067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.089098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.089547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.090035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.090056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 
00:29:17.514 [2024-07-24 17:54:39.090473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.090933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.090947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.091410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.091861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.091876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.092353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.092784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.092799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.093283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.093690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.093720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.094178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.094692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.094723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.514 [2024-07-24 17:54:39.095271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.095758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.514 [2024-07-24 17:54:39.095788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.514 qpair failed and we were unable to recover it. 00:29:17.781 [2024-07-24 17:54:39.096357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.096823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.096853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.781 qpair failed and we were unable to recover it. 
00:29:17.781 [2024-07-24 17:54:39.097398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.097851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.097880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.781 qpair failed and we were unable to recover it. 00:29:17.781 [2024-07-24 17:54:39.098321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.098812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.098842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.781 qpair failed and we were unable to recover it. 00:29:17.781 [2024-07-24 17:54:39.099385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.099836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.099866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.781 qpair failed and we were unable to recover it. 00:29:17.781 [2024-07-24 17:54:39.100406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.100951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.100981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.781 qpair failed and we were unable to recover it. 00:29:17.781 [2024-07-24 17:54:39.101502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.102033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.781 [2024-07-24 17:54:39.102085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.102596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.103110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.103141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.103661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.104176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.104208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 
00:29:17.782 [2024-07-24 17:54:39.104750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.105168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.105183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.105694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.106185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.106216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.106759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.107285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.107317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.107858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.108296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.108327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.108824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.109360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.109391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.109817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.110279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.110310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.110847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.111343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.111375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 
00:29:17.782 [2024-07-24 17:54:39.111843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.112357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.112400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.112933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.113449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.113480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.114064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.114601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.114616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.115027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.115543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.115578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.116146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.116670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.116700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.117242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.117760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.117789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.118257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.118760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.118791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 
00:29:17.782 [2024-07-24 17:54:39.119328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.119844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.119875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.120345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.120857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.120887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.121428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.121946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.121975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.122536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.122970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.123000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.123524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.123982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.124011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.124514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.124950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.124980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.125482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.125982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.126017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 
00:29:17.782 [2024-07-24 17:54:39.126577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.127014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.127055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.127526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.128017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.128065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.128525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.129036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.129079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.129638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.130146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.782 [2024-07-24 17:54:39.130177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.782 qpair failed and we were unable to recover it. 00:29:17.782 [2024-07-24 17:54:39.130650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.131141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.131173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.131721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.132239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.132271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.132818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.133334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.133366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 
00:29:17.783 [2024-07-24 17:54:39.133841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.134353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.134384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.134924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.135435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.135466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.135937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.136452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.136490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.136983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.137511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.137543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.138012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.138540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.138572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.139129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.139593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.139623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.140199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.140700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.140730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 
00:29:17.783 [2024-07-24 17:54:39.141301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.141789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.141819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.142341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.142773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.142802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.143331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.143778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.143807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.144340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.144857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.144886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.145430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.145922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.145952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.146489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.147000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.147030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.147618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.148146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.148178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 
00:29:17.783 [2024-07-24 17:54:39.148612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.149055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.149086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.149517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.149881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.149910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.150403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.150917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.150953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.151465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.151963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.151992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.152519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.153020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.153060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.153622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.154111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.154143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.154665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.155191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.155222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 
00:29:17.783 [2024-07-24 17:54:39.155685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.156175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.156207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.156681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.157120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.157152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.783 qpair failed and we were unable to recover it. 00:29:17.783 [2024-07-24 17:54:39.157698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.783 [2024-07-24 17:54:39.158216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.158248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.158743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.159271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.159302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.159752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.160246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.160277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.160816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.161334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.161364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.161899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.162335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.162350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-07-24 17:54:39.162820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.163304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.163318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.163824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.164401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.164432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.164935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.165450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.165481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.165938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.166405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.166436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.166891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.167425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.167456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.168115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.168628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.168643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.169119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.169634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.169664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-07-24 17:54:39.170198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.170715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.170745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.171201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.171711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.171740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.172284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.172797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.172828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.173395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.173888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.173917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.174437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.174874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.174903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.175413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.175929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.175958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.176506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.176959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.176988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-07-24 17:54:39.177490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.177944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.177973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.178428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.178940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.178983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.179450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.179951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.179980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.180531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.181063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.181094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.181564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.181999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.182028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.182581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.183073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.183104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.183654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.184167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.184199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-07-24 17:54:39.184668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.185110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.784 [2024-07-24 17:54:39.185141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.784 qpair failed and we were unable to recover it. 00:29:17.784 [2024-07-24 17:54:39.185684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.186247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.186277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.186780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.187255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.187286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.187752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.188271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.188301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.188840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.189333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.189364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.189892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.190430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.190462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.191003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.191517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.191548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-07-24 17:54:39.192123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.192660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.192689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.193253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.193756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.193786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.194362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.194815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.194844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.195287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.195800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.195829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.196331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.196825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.196854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.197386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.197906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.197935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.198480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.198991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.199021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-07-24 17:54:39.199518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.200058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.200089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.200646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.201078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.201110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.201658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.202122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.202153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.202666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.203182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.203214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.203760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.204273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.204288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.204747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.205235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.205267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.205725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.206233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.206247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-07-24 17:54:39.206719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.207258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.207289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.207832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.208293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.208325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.208863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.209374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.209405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.209991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.210548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.210579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.211141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.211634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.211664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.212222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.212739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.212769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.785 qpair failed and we were unable to recover it. 00:29:17.785 [2024-07-24 17:54:39.213235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.785 [2024-07-24 17:54:39.213746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.213775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 
00:29:17.786 [2024-07-24 17:54:39.214321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.214830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.214861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.215306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.215771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.215801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.216300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.216754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.216784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.217299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.217742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.217771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.218237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.218767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.218797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.219334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.219825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.219855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.220310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.220851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.220881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 
00:29:17.786 [2024-07-24 17:54:39.221374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.221886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.221915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.222465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.222983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.223013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.223518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.224080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.224111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.224638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.225147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.225178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.225696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.226241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.226272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.226851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.227369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.227401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.227833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.228309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.228341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 
00:29:17.786 [2024-07-24 17:54:39.228855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.229345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.229378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.229970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.230487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.230518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.231063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.231577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.231608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.232170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.232588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.232617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.233073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.233601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.233631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.234167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.234696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.234725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.786 qpair failed and we were unable to recover it. 00:29:17.786 [2024-07-24 17:54:39.235259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.786 [2024-07-24 17:54:39.235752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.235782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 
00:29:17.787 [2024-07-24 17:54:39.236230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.236765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.236795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.237351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.237869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.237899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.238443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.238889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.238919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.239463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.239974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.240004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.240508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.240999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.241029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.241599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.242035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.242077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.242619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.243168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.243200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 
00:29:17.787 [2024-07-24 17:54:39.243671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.244210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.244241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.244781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.245219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.245250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.245701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.246173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.246204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.246687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.247170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.247202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.247749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.248220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.248251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.248695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.249159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.249190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.249710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.250239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.250271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 
00:29:17.787 [2024-07-24 17:54:39.250749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.251240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.251281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.251775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.252160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.252191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.252685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.253216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.253248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.253719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.254235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.254267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.254846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.255335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.255350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.255885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.256438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.256469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.256957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.257472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.257502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 
00:29:17.787 [2024-07-24 17:54:39.257958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.258402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.258433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.258949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.259412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.259444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.259976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.260544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.260575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.261098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.261554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.261583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.262105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.262589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.262618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.787 qpair failed and we were unable to recover it. 00:29:17.787 [2024-07-24 17:54:39.263140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.787 [2024-07-24 17:54:39.263663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.263692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.264202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.264689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.264704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 
00:29:17.788 [2024-07-24 17:54:39.265236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.265685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.265715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.266257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.266759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.266789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.267311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.267829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.267859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.268295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.268774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.268803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.269243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.269927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.269957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.270492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.270990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.271029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.271531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.272022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.272073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 
00:29:17.788 [2024-07-24 17:54:39.272533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.273067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.273105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.273584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.274113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.274145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.274703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.275228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.275260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.275706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.276260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.276291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.276749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.277259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.277291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.277765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.278202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.278216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.278626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.279133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.279149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 
00:29:17.788 [2024-07-24 17:54:39.279519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.279959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.279989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.280454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.280907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.280937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.281446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.281883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.281913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.282436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.282999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.283052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.283477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.283986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.284016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.284507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.285068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.285099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.285641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.286182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.286213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 
00:29:17.788 [2024-07-24 17:54:39.286769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.287325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.287356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.287880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.288370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.288402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.288951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.289439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.289471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.788 [2024-07-24 17:54:39.290009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.290548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.788 [2024-07-24 17:54:39.290580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.788 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.291096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.291539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.291569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.292114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.292658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.292688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.293217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.293709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.293745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 
00:29:17.789 [2024-07-24 17:54:39.294215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.294682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.294713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.295209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.295666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.295681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.296147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.296636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.296667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.297216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.297710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.297740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.298302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.298814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.298844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.299409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.299850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.299880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.300434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.300947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.300978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 
00:29:17.789 [2024-07-24 17:54:39.301496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.301936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.301967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.302484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.302906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.302921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.303384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.303798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.303816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.304229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.304689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.304703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.305111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.305486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.305516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.305920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.306433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.306448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.306854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.307262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.307278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 
00:29:17.789 [2024-07-24 17:54:39.307714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.308173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.308189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.308680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.309174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.309206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.309656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.310197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.310211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.310619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.311107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.311139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.311566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.312054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.312069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.312513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.312989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.313003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.313421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.313827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.313858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 
00:29:17.789 [2024-07-24 17:54:39.314351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.314807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.314837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.315075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.315539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.315568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.316016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.316544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.316575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.317068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.317506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.317535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.789 qpair failed and we were unable to recover it. 00:29:17.789 [2024-07-24 17:54:39.318029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.789 [2024-07-24 17:54:39.318428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.318458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.318888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.319278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.319292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.319725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.320188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.320220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 
00:29:17.790 [2024-07-24 17:54:39.320686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.321170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.321201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.321698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.322126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.322157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.322610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.323036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.323057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.323550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.324082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.324113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.324591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.325066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.325081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.325508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.325863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.325877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.326365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.326828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.326857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 
00:29:17.790 [2024-07-24 17:54:39.327139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.327602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.327616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.328084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.328492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.328521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.328885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.329315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.329330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.329570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.329959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.329973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.330435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.330838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.330867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.331346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.331846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.331860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.332278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.332748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.332761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 
00:29:17.790 [2024-07-24 17:54:39.333010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.333355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.333370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.333872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.334372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.334387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.334791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.335269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.335283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.335755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.336258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.336273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.336734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.337212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.337227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.337624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.338155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.338186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.338723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.339259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.339290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 
00:29:17.790 [2024-07-24 17:54:39.339774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.340285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.340316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.340870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.341379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.341412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.341968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.342462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.342493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.343011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.343536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.343567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.790 [2024-07-24 17:54:39.344134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.344644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.790 [2024-07-24 17:54:39.344674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.790 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.345235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.345740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.345770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.346319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.346828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.346857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 
00:29:17.791 [2024-07-24 17:54:39.347429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.347928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.347958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.348507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.349024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.349065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.349624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.350111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.350142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.350616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.351123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.351154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.351697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.352153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.352184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.352731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.353246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.353277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.353744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.354263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.354295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 
00:29:17.791 [2024-07-24 17:54:39.354852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.355308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.355339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.355735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.356196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.356228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.356779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.357291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.357322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.357855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.358389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.358420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.358950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.359505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.359536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.360079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.360524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.360554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.361009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.361510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.361542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 
00:29:17.791 [2024-07-24 17:54:39.362026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.362557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.362571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.363053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.363424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.363438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.363785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.364323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.364359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.364860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.365360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.365392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.365897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.366416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.366431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.366851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.367326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.367357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 00:29:17.791 [2024-07-24 17:54:39.367824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.368409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.368441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.791 qpair failed and we were unable to recover it. 
00:29:17.791 [2024-07-24 17:54:39.368897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.791 [2024-07-24 17:54:39.369391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.792 [2024-07-24 17:54:39.369427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.792 qpair failed and we were unable to recover it. 00:29:17.792 [2024-07-24 17:54:39.369843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.792 [2024-07-24 17:54:39.370330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.792 [2024-07-24 17:54:39.370361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.792 qpair failed and we were unable to recover it. 00:29:17.792 [2024-07-24 17:54:39.370753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.792 [2024-07-24 17:54:39.371208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.792 [2024-07-24 17:54:39.371238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.792 qpair failed and we were unable to recover it. 00:29:17.792 [2024-07-24 17:54:39.371746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.792 [2024-07-24 17:54:39.372285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.792 [2024-07-24 17:54:39.372316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.792 qpair failed and we were unable to recover it. 00:29:17.792 [2024-07-24 17:54:39.372817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.792 [2024-07-24 17:54:39.373297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.792 [2024-07-24 17:54:39.373328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:17.792 qpair failed and we were unable to recover it. 00:29:18.058 [2024-07-24 17:54:39.373869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-07-24 17:54:39.374316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-07-24 17:54:39.374347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-07-24 17:54:39.374748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-07-24 17:54:39.375243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-07-24 17:54:39.375285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 
00:29:18.058 [2024-07-24 17:54:39.375728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-07-24 17:54:39.376247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-07-24 17:54:39.376282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-07-24 17:54:39.376820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-07-24 17:54:39.377354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-07-24 17:54:39.377385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-07-24 17:54:39.377941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.378434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.378465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.379006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.379434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.379465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.379990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.380520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.380551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.380976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.381691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.381777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.382018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2050200 is same with the state(5) to be set 00:29:18.059 [2024-07-24 17:54:39.382619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.383109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.383151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 
00:29:18.059 [2024-07-24 17:54:39.383688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.384149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.384181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.384683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.385222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.385254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.385835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.386278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.386313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.386823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.387367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.387399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.387896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.388409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.388443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.388900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.389428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.389460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.389908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.390579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.390609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 
00:29:18.059 [2024-07-24 17:54:39.391143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.391544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.391555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.391973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.392348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.392380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.392833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.393229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.393241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.393711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.394265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.394295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.394834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.395316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.395350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.395831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.396381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.396414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.396872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.397390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.397420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 
00:29:18.059 [2024-07-24 17:54:39.397861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.398313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.398344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.398745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.399254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.399290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.399792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.400305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.400339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.400782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.401311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.401343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.401824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.402260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.402291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.402749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.403281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.403325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.403811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.404326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.404359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 
00:29:18.059 [2024-07-24 17:54:39.404842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.405354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.405384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-07-24 17:54:39.405794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.406303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-07-24 17:54:39.406334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.406781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.407448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.407480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.408025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.408528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.408559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.409074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.409615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.409644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.410089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.410490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.410519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.411030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.411573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.411604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 
00:29:18.060 [2024-07-24 17:54:39.412130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.412567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.412596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.413065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.413563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.413593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.414122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.414649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.414678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.415150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.415639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.415668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.416137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.416643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.416673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.417135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.417622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.417652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.418209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.418657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.418686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 
00:29:18.060 [2024-07-24 17:54:39.419206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.419694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.419723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.420266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.420811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.420840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.421380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.421819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.421849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.422375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.422905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.422934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.423449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.423948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.423978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.424528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.424956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.424965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.425427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.425991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.426020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 
00:29:18.060 [2024-07-24 17:54:39.426580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.427085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.427117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.427520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.428034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.428073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.428544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.429063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.429094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.429607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.430128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.430159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.430703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.431218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.431249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.431691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.432179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.432210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.432753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.433241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.433271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 
00:29:18.060 [2024-07-24 17:54:39.433810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.434288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.434319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.434758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.435265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-07-24 17:54:39.435296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-07-24 17:54:39.435791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.436305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.436335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.436900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.437392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.437425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.437891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.438270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.438300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.438824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.439341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.439372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.439815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.440257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.440287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 
00:29:18.061 [2024-07-24 17:54:39.440671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.441111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.441141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.441602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.442075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.442107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.442626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.442939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.442968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.443449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.443901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.443931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.444359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.444798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.444827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.445346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.445858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.445887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.446356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.446813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.446842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 
00:29:18.061 [2024-07-24 17:54:39.447375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.447910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.447939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.448475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.449016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.449052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.449546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.449991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.450019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.450498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.450987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.451016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.451370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.451887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.451916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.452368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.452857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.452886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.453336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.453784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.453814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 
00:29:18.061 [2024-07-24 17:54:39.454281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.454707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.454736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.455163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.455590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.455619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.456084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.456310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.456339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.456865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.457322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.457352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.457791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.458245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.458254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.458679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.459124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.459154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.459669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.460109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.460140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 
00:29:18.061 [2024-07-24 17:54:39.460581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.461015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.461051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.461594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.462012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.061 [2024-07-24 17:54:39.462049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.061 qpair failed and we were unable to recover it. 00:29:18.061 [2024-07-24 17:54:39.462542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.463031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.463078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.463519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.464036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.464077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.464543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.464972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.465001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.465460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.465947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.465977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.466429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.466912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.466942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 
00:29:18.062 [2024-07-24 17:54:39.467393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.467813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.467857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.468317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.468767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.468796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.469290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.469799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.469828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.470340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.470774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.470803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.471247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.471736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.471775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.472201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.472653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.472683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.473126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.473380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.473409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 
00:29:18.062 [2024-07-24 17:54:39.473860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.474349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.474380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.474873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.475378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.475408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.475927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.476371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.476401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.476830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.477336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.477366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.477825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.478323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.478354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.478753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.479171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.479201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.479691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.480171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.480201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 
00:29:18.062 [2024-07-24 17:54:39.480688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.481185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.481214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.481729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.482412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.482448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.483034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.483380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.483391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.483812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.484195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.484209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.484484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.484827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.484837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.485170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.485578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.485608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.486121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.486623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.486655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 
00:29:18.062 [2024-07-24 17:54:39.487162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.487606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.487635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.488017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.488491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.488523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.062 qpair failed and we were unable to recover it. 00:29:18.062 [2024-07-24 17:54:39.488981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.062 [2024-07-24 17:54:39.489414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.489445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.489863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.490283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.490313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.490741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.491240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.491253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.491579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.491980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.492009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.492438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.492940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.492970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 
00:29:18.063 [2024-07-24 17:54:39.493423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.493855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.493884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.494395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.494621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.494650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.495037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.495492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.495523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.496011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.496480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.496511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.497010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.497449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.497479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.497912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.498272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.498302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.498825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.499305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.499336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 
00:29:18.063 [2024-07-24 17:54:39.499823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.500248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.500285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.500735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.500959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.500968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.501298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.501759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.501768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.502369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.502667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.502696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.502888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.503390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.503420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.503952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.504455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.504485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.504867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.505281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.505311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 
00:29:18.063 [2024-07-24 17:54:39.505732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.506172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.506182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.506616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.507076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.507107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.507611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.508687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.508705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.509134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.509547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.509560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.063 qpair failed and we were unable to recover it. 00:29:18.063 [2024-07-24 17:54:39.509944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.510384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.063 [2024-07-24 17:54:39.510394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.510736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.511182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.511214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.511648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.512162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.512192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 
00:29:18.064 [2024-07-24 17:54:39.512715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.513235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.513264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.513632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.513995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.514024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.514414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.514830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.514859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.515242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.515657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.515686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.516117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.516547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.516576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.516947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.517427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.517457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.517939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.518146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.518158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 
00:29:18.064 [2024-07-24 17:54:39.518658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.519138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.519168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.519540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.519968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.519996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.520442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.520869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.520899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.521157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.521486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.521515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.522022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.522495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.522525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.522952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.523311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.523342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.523721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.524148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.524178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 
00:29:18.064 [2024-07-24 17:54:39.524709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.524885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.524913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.525347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.525759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.525788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.526242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.526631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.526659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.527198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.527679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.527708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.528172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.528569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.528601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.529030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.529488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.529518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.529765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.530248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.530277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 
00:29:18.064 [2024-07-24 17:54:39.530664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.531029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.531079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.531512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.531937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.531966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.532332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.532687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.532716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.533150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.533649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.533678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.534033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.534409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.064 [2024-07-24 17:54:39.534440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-07-24 17:54:39.534982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.535414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.535443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.535876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.536491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.536522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 
00:29:18.065 [2024-07-24 17:54:39.537077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.537442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.537471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.537977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.538421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.538451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.538981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.539372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.539403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.539916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.540277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.540307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.540912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.541152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.541181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.541665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.542088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.542118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.542542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.542898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.542926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 
00:29:18.065 [2024-07-24 17:54:39.543368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.543783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.543811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.544225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.544686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.544714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.545186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.545556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.545585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.546014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.546468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.546498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.546912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.547332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.547362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.547786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.548214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.548244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.548748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.549188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.549218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 
00:29:18.065 [2024-07-24 17:54:39.549660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.550015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.550050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.550481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.550721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.550750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.551182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.551617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.551646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.552081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.552433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.552462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.552901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.553298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.553328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.553770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.554177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.554188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.554676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.555011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.555021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 
00:29:18.065 [2024-07-24 17:54:39.555461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.555815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.555843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.556324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.556688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.556719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.557215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.557675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.557685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.558079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.558518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.558527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.558944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.559326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.559336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-07-24 17:54:39.559732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.560145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.065 [2024-07-24 17:54:39.560155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.560569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.560953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.560963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 
00:29:18.066 [2024-07-24 17:54:39.561374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.561749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.561758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.562109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.562527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.562536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.562919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.563302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.563311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.563769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.564140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.564150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.564560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.564876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.564885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.565304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.565536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.565545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.565935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.566460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.566470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 
00:29:18.066 [2024-07-24 17:54:39.566886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.567205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.567214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.567694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.568007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.568016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.568420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.568810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.568820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.569216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.569699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.569708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.569955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.570332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.570342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.570656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.571112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.571122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.571510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.571893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.571902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 
00:29:18.066 [2024-07-24 17:54:39.572303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.572737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.572747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.573127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.573445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.573454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.573763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.574080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.574089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.574500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.574889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.574899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.575290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.575684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.575693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.576090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.576566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.576595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.577098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.577484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.577493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 
00:29:18.066 [2024-07-24 17:54:39.577833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.578218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.578227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.578563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.578950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.578959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.579578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.579973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.580002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.580458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.580884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.580893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.581287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.581613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.581622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.581996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.582372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.066 [2024-07-24 17:54:39.582382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.066 qpair failed and we were unable to recover it. 00:29:18.066 [2024-07-24 17:54:39.582711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.583100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.583109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 
00:29:18.067 [2024-07-24 17:54:39.583512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.583948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.583957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.584407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.584807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.584816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.585155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.585470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.585480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.585792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.586250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.586260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.586722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.587158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.587168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.587587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.587967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.587977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.588361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.588762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.588772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 
00:29:18.067 [2024-07-24 17:54:39.589250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.589666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.589676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.590088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.590417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.590427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.590811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.591276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.591286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.591671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.592068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.592078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.592448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.592828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.592838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.593214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.593599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.593609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.593926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.594263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.594273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 
00:29:18.067 [2024-07-24 17:54:39.594709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.595091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.595100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.595560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.595947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.595956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.596305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.596677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.596687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.597064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.597526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.597535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.597940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.598355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.598365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.598804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.599237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.599247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.599689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.600141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.600151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 
00:29:18.067 [2024-07-24 17:54:39.600537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.601010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.601020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.601359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.601817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.601826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.602316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.602779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.602789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.603176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.603612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.603621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.603989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.604332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.604342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.604762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.605149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.605159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.067 qpair failed and we were unable to recover it. 00:29:18.067 [2024-07-24 17:54:39.605597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.605887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.067 [2024-07-24 17:54:39.605897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 
00:29:18.068 [2024-07-24 17:54:39.606354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.606683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.606693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.607144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.607605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.607614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.608003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.608412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.608422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.608874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.609262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.609272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.609679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.610073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.610083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.610472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.610673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.610683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.611168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.611554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.611564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 
00:29:18.068 [2024-07-24 17:54:39.611761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.612144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.612154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.612466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.612922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.612931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.613371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.613754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.613763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.614219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.614675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.614684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.615121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.615587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.615597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.615982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.616370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.616380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.616816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.617296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.617306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 
00:29:18.068 [2024-07-24 17:54:39.617689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.618011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.618021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.618485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.618665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.618675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.619107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.619545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.619554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.619879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.620363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.620375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.620817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.621194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.621204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.621663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.621982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.621991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.622391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.622762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.622772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 
00:29:18.068 [2024-07-24 17:54:39.623143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.623484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.623494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.623932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.624318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.624329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.624781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.625236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.068 [2024-07-24 17:54:39.625248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.068 qpair failed and we were unable to recover it. 00:29:18.068 [2024-07-24 17:54:39.625570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.625907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.625917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.626297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.626751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.626763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.627155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.627504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.627513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.627998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.628413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.628423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 
00:29:18.069 [2024-07-24 17:54:39.628805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.629239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.629249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.629708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.630145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.630156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.630477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.630814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.630823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.631175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.631600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.631610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.631944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.632329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.632343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.632731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.633124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.633134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.633519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.633844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.633854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 
00:29:18.069 [2024-07-24 17:54:39.634235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.634714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.634725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.635100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.635510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.635519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.635907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.636289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.636299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.636700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.637039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.637061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.637549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.637869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.637879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.638298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.638696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.638706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.639113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.639419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.639429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 
00:29:18.069 [2024-07-24 17:54:39.639802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.640269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.640280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.640659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.640975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.640985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.641368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.641742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.641751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.642186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.642501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.642513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.642987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.643364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.643374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.643745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.643942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.643951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 00:29:18.069 [2024-07-24 17:54:39.644365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.644689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.069 [2024-07-24 17:54:39.644698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.069 qpair failed and we were unable to recover it. 
00:29:18.069 [2024-07-24 17:54:39.645003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.069 [2024-07-24 17:54:39.645381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.069 [2024-07-24 17:54:39.645391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420
00:29:18.069 qpair failed and we were unable to recover it.
00:29:18.069 ... 00:29:18.343 [2024-07-24 17:54:39.645774 - 17:54:39.772742] the same failure sequence repeats for every reconnect attempt in this interval: two posix.c:1032:posix_sock_create *ERROR*: connect() failed, errno = 111 entries, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."
00:29:18.343 [2024-07-24 17:54:39.773136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.343 [2024-07-24 17:54:39.773536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.343 [2024-07-24 17:54:39.773546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420
00:29:18.343 qpair failed and we were unable to recover it.
00:29:18.343 [2024-07-24 17:54:39.773924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.774337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.774347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.343 qpair failed and we were unable to recover it. 00:29:18.343 [2024-07-24 17:54:39.774735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.775110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.775120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.343 qpair failed and we were unable to recover it. 00:29:18.343 [2024-07-24 17:54:39.775255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.775588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.775597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.343 qpair failed and we were unable to recover it. 00:29:18.343 [2024-07-24 17:54:39.776033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.776472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.776483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.343 qpair failed and we were unable to recover it. 00:29:18.343 [2024-07-24 17:54:39.776944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.777324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.777335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.343 qpair failed and we were unable to recover it. 00:29:18.343 [2024-07-24 17:54:39.777716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.778184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.343 [2024-07-24 17:54:39.778194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.343 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.778619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.778936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.778946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 
00:29:18.344 [2024-07-24 17:54:39.779397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.779805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.779816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.780213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.780603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.780612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.781053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.781248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.781258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.781442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.781878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.781887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.782225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.782605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.782615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.783073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.783461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.783471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.783933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.784413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.784422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 
00:29:18.344 [2024-07-24 17:54:39.784873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.785327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.785337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.785773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.786196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.786218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.786618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.787052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.787062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.787397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.787856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.787866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.788336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.788746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.788757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.789196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.789652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.789662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.790050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.790488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.790498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 
00:29:18.344 [2024-07-24 17:54:39.790941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.791326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.791337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.791773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.792153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.792164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.792554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.792964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.792973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.793366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.793805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.793814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.794235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.794644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.794654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.795053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.795441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.795451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.795913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.796359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.796369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 
00:29:18.344 [2024-07-24 17:54:39.796807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.797194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.797204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.797676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.798025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.798035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.798443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.798938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.798966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.799366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.799786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.799814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.800262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.800760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.800789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.344 [2024-07-24 17:54:39.801219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.801737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.344 [2024-07-24 17:54:39.801765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.344 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.802220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.802706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.802735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 
00:29:18.345 [2024-07-24 17:54:39.803115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.803487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.803496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.803957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.804348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.804358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.804756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.805155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.805165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.805641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.806033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.806046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.806391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.806770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.806780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.807254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.807628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.807637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.807944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.808297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.808307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 
00:29:18.345 [2024-07-24 17:54:39.808686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.809121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.809131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.809510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.809887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.809897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.810366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.810745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.810755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.811139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.811596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.811605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.812010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.812338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.812347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.812739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.813115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.813125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.813559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.814030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.814039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 
00:29:18.345 [2024-07-24 17:54:39.814509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.814724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.814733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.815118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.815554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.815564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.815892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.816330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.816340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.816667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.817067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.817076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.817423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.817764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.817774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.818185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.818576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.818586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.818960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.819279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.819289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 
00:29:18.345 [2024-07-24 17:54:39.819683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.820139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.820149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.820521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.820845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.820855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.821319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.821780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.821790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.822249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.822686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.822695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.823074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.823451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.823460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.823843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.824286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.824296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.345 qpair failed and we were unable to recover it. 00:29:18.345 [2024-07-24 17:54:39.824686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.345 [2024-07-24 17:54:39.825073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.825083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 
00:29:18.346 [2024-07-24 17:54:39.825520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.825844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.825854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.826247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.826666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.826675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.826813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.827249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.827258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.827575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.827889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.827900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.828297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.828699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.828708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.829142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.829544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.829555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.830017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.830406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.830416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 
00:29:18.346 [2024-07-24 17:54:39.830808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.831128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.831139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.831597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.831925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.831935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.832255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.832633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.832643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.833107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.833477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.833487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.833961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.834279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.834289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.834677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.835062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.835072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.835391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.835837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.835847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 
00:29:18.346 [2024-07-24 17:54:39.836317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.836781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.836791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.837195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.837601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.837611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.837995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.838191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.838202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.838605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.838974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.838985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.839449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.839887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.839897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.840300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.840644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.840655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.841301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.841740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.841769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 
00:29:18.346 [2024-07-24 17:54:39.842459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.842908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.842938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.843377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.843785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.843794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.844206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.844616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.844636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.845050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.845434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.845444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.845830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.846296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.846317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.846706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.847097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.847108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 00:29:18.346 [2024-07-24 17:54:39.847492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.847877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.346 [2024-07-24 17:54:39.847886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.346 qpair failed and we were unable to recover it. 
00:29:18.347 [2024-07-24 17:54:39.848327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.848724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.848734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.849062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.849485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.849495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.849877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.850164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.850174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.850560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.850941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.850950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.851404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.851788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.851797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.852170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.852580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.852590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.852972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.853310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.853321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 
00:29:18.347 [2024-07-24 17:54:39.853712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.854194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.854205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.854610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.854960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.854970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.855361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.855712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.855721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.856178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.856569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.856579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.857017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.857418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.857428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.857866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.858249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.858259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.858640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.859101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.859111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 
00:29:18.347 [2024-07-24 17:54:39.859491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.859932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.859942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.860261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.860587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.860596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.860979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.861362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.861372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.861831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.862276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.862287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.862723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.863156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.863166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.863535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.863939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.863949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.864265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.864639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.864649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 
00:29:18.347 [2024-07-24 17:54:39.865076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.865561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.865571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.866007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.866410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.866419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-07-24 17:54:39.866814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.867189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-07-24 17:54:39.867199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.867644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.868036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.868054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.868498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.868945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.868954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.869251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.869747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.869757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.870142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.870602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.870611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 
00:29:18.348 [2024-07-24 17:54:39.871006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.871464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.871474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.871936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.872315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.872325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.872714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.873113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.873123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.873595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.873923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.873933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.874314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.874695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.874704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.875192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.875579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.875589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.875808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.876327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.876337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 
00:29:18.348 [2024-07-24 17:54:39.876784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.877169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.877199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.877718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.878123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.878133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.878519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.878993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.879002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.879463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.879847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.879856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.880241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.880632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.880642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.881022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.881418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.881448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.881932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.882429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.882458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 
00:29:18.348 [2024-07-24 17:54:39.882940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.883414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.883444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.883813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.884226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.884261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.884676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.884998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.885027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.885394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.885630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.885658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.886139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.886499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.886528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.886929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.887402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.887432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.887941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.888360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.888389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 
00:29:18.348 [2024-07-24 17:54:39.888892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.889366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.889396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.889898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.890316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.890345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.890827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.891248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-07-24 17:54:39.891279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-07-24 17:54:39.891758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.892179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.892209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.892655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.893129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.893165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.893583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.893907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.893936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.894441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.894688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.894697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 
00:29:18.349 [2024-07-24 17:54:39.895061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.895435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.895464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.895911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.896347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.896379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.896807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.897263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.897293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.897718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.898162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.898192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.898566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.898935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.898964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.899444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.899935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.899965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.900395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.900872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.900901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 
00:29:18.349 [2024-07-24 17:54:39.901259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.901587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.901621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.902063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.902491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.902520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.902978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.903305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.903335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.903786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.904214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.904244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.904680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.905154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.905184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.905622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.906058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.906088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.906566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.907081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.907110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 
00:29:18.349 [2024-07-24 17:54:39.907530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.908028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.908069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.908523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.908974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.908982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.909306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.909771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.909800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.910229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.910640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.910679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.910921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.911272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.911301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.911722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.912218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.912247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.912667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.913088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.913118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 
00:29:18.349 [2024-07-24 17:54:39.913623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.914033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.914070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.914489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.914910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.914918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.915305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.915767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.915796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-07-24 17:54:39.916275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-07-24 17:54:39.916748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.916777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-07-24 17:54:39.917197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.917581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.917610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-07-24 17:54:39.918065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.918505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.918533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-07-24 17:54:39.918989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.919462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.919492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 
00:29:18.350 [2024-07-24 17:54:39.919849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.920104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.920113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-07-24 17:54:39.920528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.920955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.920995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-07-24 17:54:39.921462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.921901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.921910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-07-24 17:54:39.922410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.922886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.922915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-07-24 17:54:39.923467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.923906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.923935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-07-24 17:54:39.924281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.924800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.924809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-07-24 17:54:39.925019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.925342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.925372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 
00:29:18.350 [2024-07-24 17:54:39.925786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.926241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.926271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-07-24 17:54:39.926692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.927177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-07-24 17:54:39.927207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.927631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.928137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.928167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.928550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.928973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.929002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.929448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.929923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.929952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.930429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.930794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.930823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.931279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.931685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.931713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 
00:29:18.617 [2024-07-24 17:54:39.932215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.932652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.932680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.933159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.933582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.933611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.934066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.934544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.934573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.934993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.935505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.935535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.935957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.936382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.936411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.936850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.937338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.937367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.937829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.938305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.938335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 
00:29:18.617 [2024-07-24 17:54:39.938780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.939252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.939281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.939791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.940155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.940185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.940602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.941077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.941108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.941587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.942087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.942118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.942557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.942991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.943019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.617 qpair failed and we were unable to recover it. 00:29:18.617 [2024-07-24 17:54:39.943302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.943724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.617 [2024-07-24 17:54:39.943753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.944256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.944696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.944725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 
00:29:18.618 [2024-07-24 17:54:39.945182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.945725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.945754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.946182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.946621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.946651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.947081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.947556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.947586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.948029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.948456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.948485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.948984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.949470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.949499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.949946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.950429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.950459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.950980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.951399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.951429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 
00:29:18.618 [2024-07-24 17:54:39.951858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.952308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.952338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.952767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.953091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.953121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.953544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.953970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.953999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.954488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.954921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.954950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.955361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.955862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.955890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.956355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.956817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.956846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.957286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.957784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.957813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 
00:29:18.618 [2024-07-24 17:54:39.958232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.958729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.958757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.959298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.959749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.959777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.960204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.960697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.960726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.961182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.961608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.961636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.962114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.962611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.962640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.963133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.963523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.963552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.963974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.964458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.964487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 
00:29:18.618 [2024-07-24 17:54:39.964855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.965271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.965281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.965727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.966117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.966147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.966573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.966938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.966966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.967284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.967525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.967554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.967985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.968402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.968431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.618 [2024-07-24 17:54:39.968916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.969332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.618 [2024-07-24 17:54:39.969362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.618 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.969721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.969944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.969953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 
00:29:18.619 [2024-07-24 17:54:39.970417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.970816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.970844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.971284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.971682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.971711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.972089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.972432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.972461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.972908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.973403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.973433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.973863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.974385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.974416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.974791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.975205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.975235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.975715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.976191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.976220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 
00:29:18.619 [2024-07-24 17:54:39.976717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.977135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.977144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.977595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.977759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.977788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.978234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.978624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.978633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.979082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.979577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.979606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.980114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.980536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.980565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.981022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.981523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.981553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.982060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.982479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.982507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 
00:29:18.619 [2024-07-24 17:54:39.982700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.983193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.983224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.983703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.984154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.984184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.984680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.985055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.985094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.985535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.985959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.985988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.986498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.986857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.986885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.987315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.987817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.987852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.988292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.988734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.988762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 
00:29:18.619 [2024-07-24 17:54:39.989278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.989732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.989761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.990210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.990653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.990681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.991111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.991590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.991618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.992124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.992496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.992526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.993003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.993435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.993464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.993943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.994383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.619 [2024-07-24 17:54:39.994413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.619 qpair failed and we were unable to recover it. 00:29:18.619 [2024-07-24 17:54:39.994895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.995343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.995373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 
00:29:18.620 [2024-07-24 17:54:39.995641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.996054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.996084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:39.996587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.997017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.997062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:39.997489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.997855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.997884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:39.998362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.998791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.998820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:39.999071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.999516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:39.999545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:39.999980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.000398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.000428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.000905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.001235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.001266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 
00:29:18.620 [2024-07-24 17:54:40.001778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.002208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.002217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.002608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.002920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.002929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.003367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.003815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.003825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.004250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.004635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.004645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.005033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.005437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.005447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.006063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.006430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.006440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.006765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.007146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.007156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 
00:29:18.620 [2024-07-24 17:54:40.007473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.007900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.007909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.008290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.008709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.008719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.009126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.009518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.009529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.009836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.010418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.010441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.010924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.011359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.011394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.011845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.012362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.012410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.012920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.013424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.013484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 
00:29:18.620 [2024-07-24 17:54:40.014051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.014634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.014660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6fc000b90 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.015290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.015892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.015913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.016316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.016729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.016744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.017093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.017493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.017507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.017811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.018147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.018163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.018671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.019118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.019139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.620 qpair failed and we were unable to recover it. 00:29:18.620 [2024-07-24 17:54:40.019384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.019827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.620 [2024-07-24 17:54:40.019840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 
00:29:18.621 [2024-07-24 17:54:40.020289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.020741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.020755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.021158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.021627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.021640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.021960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.022176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.022191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.022402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.022844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.022858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.023265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.023470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.023483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.023935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.024268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.024285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.024626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.025089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.025104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 
00:29:18.621 [2024-07-24 17:54:40.025402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.025780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.025793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.026276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.026676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.026693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.027106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.027496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.027509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.027931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.028261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.028276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.028691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.029156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.029171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.029557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.029956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.029969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.030369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.030811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.030825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 
00:29:18.621 [2024-07-24 17:54:40.031144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.031588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.031602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.031999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.032466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.032480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.032898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.033341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.033355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.033801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.034132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.034147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.034589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.034900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.034914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.035347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.035818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.035832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.036286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.036728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.036742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 
00:29:18.621 [2024-07-24 17:54:40.037208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.037547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.037561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.037963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.038340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.038355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.038810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.039273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.039288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.621 qpair failed and we were unable to recover it. 00:29:18.621 [2024-07-24 17:54:40.039640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.040118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.621 [2024-07-24 17:54:40.040133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.040533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.040931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.040945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.041344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.041813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.041827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.042146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.042543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.042557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 
00:29:18.622 [2024-07-24 17:54:40.042938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.043331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.043347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.043747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.044151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.044166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.044614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.045002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.045015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.045401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.045846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.045860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.046252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.046643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.046657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.047098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.047513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.047526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.047992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.048379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.048392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 
00:29:18.622 [2024-07-24 17:54:40.048863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.049264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.049278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.049599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.049991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.050004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.050475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.050940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.050953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.051396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.051806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.051819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.052199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.052624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.052638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.053071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.053536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.053549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.053950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.054416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.054430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 
00:29:18.622 [2024-07-24 17:54:40.054834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.055249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.055263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.055648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.056090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.056104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.056517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.056991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.057004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.057476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.057943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.057956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.058421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.058760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.058774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.059171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.059512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.059526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.059924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.060369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.060383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 
00:29:18.622 [2024-07-24 17:54:40.060734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.061143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.061157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.061603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.062066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.062081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.622 qpair failed and we were unable to recover it. 00:29:18.622 [2024-07-24 17:54:40.062473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.622 [2024-07-24 17:54:40.062883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.062897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.063223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.063652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.063665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.064090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.064537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.064551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.064976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.065258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.065273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.065692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.066111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.066134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 
00:29:18.623 [2024-07-24 17:54:40.066540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.067118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.067152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.067681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.068165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.068228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.068748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.069290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.069334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.069752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.070140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.070161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.070506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.070835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.070849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.071187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.071590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.071604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.072074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.072493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.072507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 
00:29:18.623 [2024-07-24 17:54:40.072881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.073299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.073313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.073860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.074279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.074293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.074644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.075091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.075105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.075499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.075942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.075955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.076426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.076743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.076756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.076909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.077249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.077262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.077685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.078088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.078102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 
00:29:18.623 [2024-07-24 17:54:40.078439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.078764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.078777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.079179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.079568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.079581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.080025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.080355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.080369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.080764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.081179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.081193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.081583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.081985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.081998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.082403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.082796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.082809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.083253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.083700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.083713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 
00:29:18.623 [2024-07-24 17:54:40.084100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.084567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.084581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.084973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.085443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.085457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.623 qpair failed and we were unable to recover it. 00:29:18.623 [2024-07-24 17:54:40.085791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.086122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.623 [2024-07-24 17:54:40.086136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.086479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.086810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.086823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.087267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.087653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.087666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.088130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.088558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.088571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.089056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.089456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.089469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 
00:29:18.624 [2024-07-24 17:54:40.089819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.090278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.090292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.090637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.091090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.091104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.091498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.091882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.091896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.092215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.092544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.092557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.093000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.093400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.093431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.093806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.094218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.094249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.094624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.095121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.095135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 
00:29:18.624 [2024-07-24 17:54:40.095588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.095967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.095980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.096216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.096893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.096922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.097369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.097755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.097768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.098160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.098603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.098617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.099082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.099465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.099478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.099885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.100332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.100362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.100809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.101240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.101270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 
00:29:18.624 [2024-07-24 17:54:40.101704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.102074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.102105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.102465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.102892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.102921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.103344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.103591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.103621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.104040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.104483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.104512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.104956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.105454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.105483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.105983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.106415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.106447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.106875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.107303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.107333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 
00:29:18.624 [2024-07-24 17:54:40.107759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.108257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.108287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.108720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.109193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.109223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.109647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.110076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.110106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.624 qpair failed and we were unable to recover it. 00:29:18.624 [2024-07-24 17:54:40.110535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.624 [2024-07-24 17:54:40.110952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.110980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.111173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.111612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.111641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.112007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.112368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.112404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.112887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.113301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.113341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 
00:29:18.625 [2024-07-24 17:54:40.113730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.114120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.114134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.114605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.115022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.115058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.115435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.115855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.115884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.116361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.116723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.116752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.117254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.117442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.117456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.117786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.118174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.118205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.118686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.119169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.119200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 
00:29:18.625 [2024-07-24 17:54:40.119633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.120055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.120090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.120421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.120679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.120708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.121211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.121708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.121722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.122133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.122561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.122590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.122964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.123271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.123301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.123782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.124197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.124227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.124608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.125016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.125051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 
00:29:18.625 [2024-07-24 17:54:40.125228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.125677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.125706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.126234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.126672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.126701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.127205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.127624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.127653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.128158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.128518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.128547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.128925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.129286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.129317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.129813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.130222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.130253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.130627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.131036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.131081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 
00:29:18.625 [2024-07-24 17:54:40.131523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.132016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.132029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.132176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.132647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.132675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.133099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.133549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.133578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.625 qpair failed and we were unable to recover it. 00:29:18.625 [2024-07-24 17:54:40.133960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.625 [2024-07-24 17:54:40.134429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.134443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.134887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.135291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.135321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.135829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.136278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.136317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.136651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.137054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.137084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 
00:29:18.626 [2024-07-24 17:54:40.137586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.138065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.138096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.138603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.138974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.139003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.139487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.139834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.139863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.140341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.140613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.140642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.140882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.141184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.141214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.141387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.141805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.141834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.142312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.142737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.142766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 
00:29:18.626 [2024-07-24 17:54:40.143251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.143678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.143707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.144226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.144647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.144676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.145025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.145508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.145537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.146037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.146466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.146495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.146856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.147358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.147389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.147801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.148258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.148272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.148670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.149033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.149072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 
00:29:18.626 [2024-07-24 17:54:40.149501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.149998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.150027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.150462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.150879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.150907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.151320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.151796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.151825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.152347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.152838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.152868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.153285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.153697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.153727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.153970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.154390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.154426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.626 qpair failed and we were unable to recover it. 00:29:18.626 [2024-07-24 17:54:40.154770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.155089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.626 [2024-07-24 17:54:40.155103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 
00:29:18.627 [2024-07-24 17:54:40.155500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.155771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.155806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.156229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.156651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.156680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.157191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.157633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.157662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.158095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.158570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.158599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.159028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.159464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.159493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.159737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.160183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.160213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.160575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.160989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.161018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 
00:29:18.627 [2024-07-24 17:54:40.161526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.162002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.162031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.162535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.162918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.162931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.163395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.163786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.163815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.164318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.164815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.164828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.165198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.165568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.165598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.166016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.166513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.166526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.167002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.167454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.167485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 
00:29:18.627 [2024-07-24 17:54:40.167940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.168172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.168185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.168583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.169001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.169030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.169459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.169892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.169905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.170375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.170733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.170763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.171187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.171598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.171627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.172137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.172648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.172677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.173108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.173535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.173548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 
00:29:18.627 [2024-07-24 17:54:40.173939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.174382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.174396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.174632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.175114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.175144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.175578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.176001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.176030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.176552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.176968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.176998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.177480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.177917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.177947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.178449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.178905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.178934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 00:29:18.627 [2024-07-24 17:54:40.179359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.179835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.627 [2024-07-24 17:54:40.179864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.627 qpair failed and we were unable to recover it. 
00:29:18.627 [2024-07-24 17:54:40.180300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.180731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.180745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.181214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.181541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.181555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.181959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.182426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.182456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.182841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.183335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.183365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.183717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.184148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.184180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.184675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.185051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.185065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.185529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.185931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.185945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 
00:29:18.628 [2024-07-24 17:54:40.186415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.186864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.186894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.187324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.187757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.187785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.188219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.188603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.188632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.189058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.189311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.189340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.189819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.190312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.190325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.190669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.190827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.190839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.191168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.191618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.191647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 
00:29:18.628 [2024-07-24 17:54:40.191996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.192467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.192481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.192933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.193374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.193404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.193840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.194260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.194290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.194786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.195173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.195203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.195684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.196060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.196090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.196591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.197007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.197021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.197469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.197943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.197972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 
00:29:18.628 [2024-07-24 17:54:40.198385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.198811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.198839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.199279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.199778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.199807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.200154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.200610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.200645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.201122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.201624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.201653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.202130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.202544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.202557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.203080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.203509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.203538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-07-24 17:54:40.204028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.204476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.204489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 
00:29:18.628 [2024-07-24 17:54:40.204734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.205247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.628 [2024-07-24 17:54:40.205261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.629 [2024-07-24 17:54:40.205668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.629 [2024-07-24 17:54:40.206092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.629 [2024-07-24 17:54:40.206122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.629 qpair failed and we were unable to recover it. 00:29:18.629 [2024-07-24 17:54:40.206561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.629 [2024-07-24 17:54:40.206863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.629 [2024-07-24 17:54:40.206894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.629 qpair failed and we were unable to recover it. 00:29:18.629 [2024-07-24 17:54:40.207363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.207801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.207831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.208219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.208728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.208758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.209236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.209661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.209696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.210173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.210592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.210605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 
00:29:18.895 [2024-07-24 17:54:40.211015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.211336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.211350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.211755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.211905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.211918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.212320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.212749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.212777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.213299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.213667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.213695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.214114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.214506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.214535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.214965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.215457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.215487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.215908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.216247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.216261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 
00:29:18.895 [2024-07-24 17:54:40.216619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.216986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.217015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.217517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.218013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.218027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.218503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.218926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.218955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.219459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.219925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.219938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.220283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.220679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.220693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.221021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.221499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.221529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.221954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.222408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.222438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 
00:29:18.895 [2024-07-24 17:54:40.222911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.223240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.223254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.223719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.224060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.224091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.224516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.224989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.225018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.225553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.226027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.226063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.226572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.227056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.227086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.227584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.228025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.228039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-07-24 17:54:40.228418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.228872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.895 [2024-07-24 17:54:40.228902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.895 qpair failed and we were unable to recover it. 
00:29:18.895 [2024-07-24 17:54:40.229435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.229795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.229824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.230302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.230743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.230785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.231165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.231614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.231643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.232142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.232582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.232611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.233100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.233542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.233571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.234069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.234483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.234511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.234964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.235403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.235434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 
00:29:18.896 [2024-07-24 17:54:40.235849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.236291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.236335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.236749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.237228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.237242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.237690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.238067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.238096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.238595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.238878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.238907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.239321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.239836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.239865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.240367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.240871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.240884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.241239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.241716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.241746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 
00:29:18.896 [2024-07-24 17:54:40.242250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.242687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.242716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.243174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.243587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.243616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.243988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.244406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.244436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.244941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.245362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.245393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.245815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.246235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.246249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.246644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.247071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.247101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.247577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.247957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.247987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 
00:29:18.896 [2024-07-24 17:54:40.248491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.248907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.248920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.249408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.249649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.249679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.250178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.250606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.250635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.251008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.251448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.251478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.251983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.252496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.252526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.253020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.253348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.253378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 00:29:18.896 [2024-07-24 17:54:40.253737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.253940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.896 [2024-07-24 17:54:40.253953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.896 qpair failed and we were unable to recover it. 
00:29:18.896 [2024-07-24 17:54:40.254373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.254830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.254865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.255356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.255852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.255882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.256358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.256773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.256804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.257231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.257620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.257649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.258076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.258552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.258581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.258952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.259426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.259456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.259937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.260435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.260466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 
00:29:18.897 [2024-07-24 17:54:40.260904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.261353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.261382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.261886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.262309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.262340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.262839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.263283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.263319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.263762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.264211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.264241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.264721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.265121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.265136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.265576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.266039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.266081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.266607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.267063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.267093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 
00:29:18.897 [2024-07-24 17:54:40.267573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.267988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.268017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.268503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.268967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.269005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.269283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.269713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.269744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.270028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.270473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.270503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.270929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.271425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.271455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.271900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.272397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.272428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.272632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.273060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.273090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 
00:29:18.897 [2024-07-24 17:54:40.273583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.274060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.274091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.274521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.275011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.275040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.275555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.275972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.276001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.276520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.276950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.276979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.277411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.277912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.277941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.278308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.278782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.278812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 00:29:18.897 [2024-07-24 17:54:40.279241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.279633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.279663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.897 qpair failed and we were unable to recover it. 
00:29:18.897 [2024-07-24 17:54:40.280164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.280683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.897 [2024-07-24 17:54:40.280712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.281143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.281619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.281648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.282175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.282650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.282679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.283025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.283459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.283489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.283923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.284405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.284435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.284799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.285226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.285241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.285700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.286079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.286109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 
00:29:18.898 [2024-07-24 17:54:40.286609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.287108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.287138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.287559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.288003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.288032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.288241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.288692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.288722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.289149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.289516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.289545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.290024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.290529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.290558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.291035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.291539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.291553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.292018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.292511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.292542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 
00:29:18.898 [2024-07-24 17:54:40.293033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.293545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.293575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.293992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.294282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.294312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.294811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.295436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.295468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.295994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.296483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.296513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.297018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.297466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.297496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.297855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.298220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.298234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 00:29:18.898 [2024-07-24 17:54:40.298651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.299052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.898 [2024-07-24 17:54:40.299082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.898 qpair failed and we were unable to recover it. 
00:29:18.904 [2024-07-24 17:54:40.428679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.429173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.429186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.429652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.430151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.430181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.430363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.430736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.430765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.431261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.431560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.431589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.432069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.432544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.432573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.433084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.433329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.433358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.433867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.434290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.434320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 
00:29:18.904 [2024-07-24 17:54:40.434743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.435069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.435099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.435604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.436100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.436131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.436571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.437014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.437052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.437328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.437751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.437780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.438256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.438729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.438758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.439175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.439532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.439561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.439979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.440357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.440387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 
00:29:18.904 [2024-07-24 17:54:40.440824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.441324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.441355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.441769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.442193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.442224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.442654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.443091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.443121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.443493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.443909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.443938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.444441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.444922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.444951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.445397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.445894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.445923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 00:29:18.904 [2024-07-24 17:54:40.446428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.446853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.904 [2024-07-24 17:54:40.446882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.904 qpair failed and we were unable to recover it. 
00:29:18.905 [2024-07-24 17:54:40.447319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.447768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.447797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.448220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.448711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.448740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.449163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.449584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.449597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.450082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.450503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.450532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.450706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.451091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.451121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.451542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.451891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.451920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.452324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.452653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.452682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 
00:29:18.905 [2024-07-24 17:54:40.453167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.453607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.453637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.454011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.454441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.454471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.454831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.455256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.455286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.455457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.455808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.455836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.456333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.456748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.456761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.457141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.457658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.457687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.458102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.458476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.458490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 
00:29:18.905 [2024-07-24 17:54:40.458969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.459311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.459341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.459586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.460090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.460120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.460547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.460902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.460930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.461299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.461769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.461783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.462239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.462655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.462684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.463038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.463493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.463524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.463965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.464488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.464519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 
00:29:18.905 [2024-07-24 17:54:40.464967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.465414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.465427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.465895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.466263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.466292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.466653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.467135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.467149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.467596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.468081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.468112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.468528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.468962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.468992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.469507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.470004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.470034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 00:29:18.905 [2024-07-24 17:54:40.470218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.470614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.470627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.905 qpair failed and we were unable to recover it. 
00:29:18.905 [2024-07-24 17:54:40.471120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.905 [2024-07-24 17:54:40.471540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.471570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.472056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.472410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.472438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.472883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.473361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.473391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.473816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.474254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.474268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.474741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.475094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.475124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.475549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.476052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.476082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.476577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.476995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.477024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 
00:29:18.906 [2024-07-24 17:54:40.477479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.477916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.477945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.478378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.478741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.478771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.479251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.479664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.479678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.480073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.480492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.480521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.481020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.481471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.481501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.481937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.482369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.482383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.482717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.483223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.483254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 
00:29:18.906 [2024-07-24 17:54:40.483670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.484061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.484091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.484535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.485009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.485038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:18.906 [2024-07-24 17:54:40.485501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.485911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.906 [2024-07-24 17:54:40.485940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:18.906 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.486389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.486818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.486848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.487261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.487749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.487763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.488193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.488690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.488719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.489061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.489562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.489591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 
00:29:19.176 [2024-07-24 17:54:40.490068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.490510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.490539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.490952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.491361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.491391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.491884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.492303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.492333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.492761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.493172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.493202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.493623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.494131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.494161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.494660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.495105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.495135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.495577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.495933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.495968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 
00:29:19.176 [2024-07-24 17:54:40.496213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.496712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.496725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.497200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.497655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.497684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.498083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.498506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.498535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.499011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.499477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.499507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.499934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.500393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.500424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.500835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.501220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.501250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.501744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.502154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.502185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 
00:29:19.176 [2024-07-24 17:54:40.502687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.503111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.503142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.503583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.504066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.504096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.504625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.505119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.505135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.505606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.506030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.506069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.506497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.506940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.506969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.507445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.507863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.507892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 00:29:19.176 [2024-07-24 17:54:40.508348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.508825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.508854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.176 qpair failed and we were unable to recover it. 
00:29:19.176 [2024-07-24 17:54:40.509264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.176 [2024-07-24 17:54:40.509688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.509717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.510220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.510583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.510612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.511054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.511424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.511453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.511886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.512334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.512364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.512782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.513227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.513256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.513733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.514143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.514163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.514524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.514942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.514971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 
00:29:19.177 [2024-07-24 17:54:40.515449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.515871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.515901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.516317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.516657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.516686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.517137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.517560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.517590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.518001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.518463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.518494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.519016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.519415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.519429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.519877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.520350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.520380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.520821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.521320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.521350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 
00:29:19.177 [2024-07-24 17:54:40.521786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.522257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.522286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.522737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.523156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.523186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.523617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.524117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.524131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.524515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.525027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.525064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.525445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.525850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.525878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.526295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.526796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.526825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.527247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.527669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.527710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 
00:29:19.177 [2024-07-24 17:54:40.528071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.528500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.528530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.529012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.529381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.529411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.529853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.530338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.530369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.530865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.531238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.531268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.531679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.532104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.532134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.532643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.533165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.533195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 00:29:19.177 [2024-07-24 17:54:40.533561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.533973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.534001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.177 qpair failed and we were unable to recover it. 
00:29:19.177 [2024-07-24 17:54:40.534540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.177 [2024-07-24 17:54:40.534881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.534894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.535340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.535806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.535820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.536288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.536727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.536756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.537170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.537612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.537625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.538024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.538397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.538427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.538924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.539331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.539361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.539726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.540138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.540152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 
00:29:19.178 [2024-07-24 17:54:40.540561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.541061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.541075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.541292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.541698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.541728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.542163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.542568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.542598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.543077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.543576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.543605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.544105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.544557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.544586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.544954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.545400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.545441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 783880 Killed "${NVMF_APP[@]}" "$@" 00:29:19.178 [2024-07-24 17:54:40.545780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.545938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.545951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 
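The "Killed" line buried in this chunk (target_disconnect.sh line 44, PID 783880) is the disconnect being exercised: the script kills the running nvmf target application, so the host's qpairs have nothing to reconnect to and the ECONNREFUSED flood continues until a new target is started. A minimal sketch of that kill-and-wait step, using a placeholder NVMF_PID rather than the framework's real bookkeeping, looks like this:

# Sketch only: force a target disconnect by killing the target app.
# NVMF_PID is a placeholder; the real script tracks the PID of "${NVMF_APP[@]}".
kill -9 "$NVMF_PID"
while kill -0 "$NVMF_PID" 2>/dev/null; do
    sleep 0.1        # wait until the process has actually exited
done
echo "target gone; host connect() attempts now fail with errno 111"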
00:29:19.178 17:54:40 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:29:19.178 [2024-07-24 17:54:40.546342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 17:54:40 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:19.178 [2024-07-24 17:54:40.546725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.546756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 17:54:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:19.178 17:54:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:19.178 [2024-07-24 17:54:40.547190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 17:54:40 -- common/autotest_common.sh@10 -- # set +x 00:29:19.178 [2024-07-24 17:54:40.547604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.547635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.547877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.548301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.548331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.548853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.549262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.549292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.549787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.550227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.550259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.550676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.551101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.551132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 
00:29:19.178 [2024-07-24 17:54:40.551634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.552054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.552084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.552564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.552974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.553003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 17:54:40 -- nvmf/common.sh@469 -- # nvmfpid=784611 00:29:19.178 [2024-07-24 17:54:40.553374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 17:54:40 -- nvmf/common.sh@470 -- # waitforlisten 784611 00:29:19.178 [2024-07-24 17:54:40.553719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.553750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 17:54:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 17:54:40 -- common/autotest_common.sh@819 -- # '[' -z 784611 ']' 00:29:19.178 [2024-07-24 17:54:40.554245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 17:54:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.178 17:54:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:19.178 17:54:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.178 17:54:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:19.178 17:54:40 -- common/autotest_common.sh@10 -- # set +x 00:29:19.178 [2024-07-24 17:54:40.555977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.556006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 00:29:19.178 [2024-07-24 17:54:40.556617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.557032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.557058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.178 qpair failed and we were unable to recover it. 
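Mixed into the connection errors, the trace shows disconnect_init restarting the target: a fresh nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with the options recorded in the log, its PID (784611) is captured as nvmfpid, and waitforlisten polls (max_retries=100) until the application is up and listening on /var/tmp/spdk.sock. A stripped-down sketch of that start-and-wait sequence, reusing the command line from the log but not the framework's actual helper functions, would be:

# Sketch of the restart visible in the trace (command line copied from the log;
# the polling loop stands in for the real waitforlisten helper).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
for _ in $(seq 1 100); do                  # max_retries=100, as in the trace
    [ -S /var/tmp/spdk.sock ] && break     # RPC socket appears once the target is ready
    sleep 0.5
done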
00:29:19.178 [2024-07-24 17:54:40.557529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.178 [2024-07-24 17:54:40.557919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.557939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.558428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.558781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.558800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.559225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.559579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.559598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.560023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.560457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.560477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.560856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.561268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.561287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.561703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.562204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.562224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.562580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.563021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.563048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 
00:29:19.179 [2024-07-24 17:54:40.563507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.563910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.563929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.564292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.564928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.564948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.565370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.565727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.565745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.566097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.566513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.566531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.569054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.569518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.569535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.569939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.570340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.570365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.570808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.571455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.571472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 
00:29:19.179 [2024-07-24 17:54:40.571806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.572199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.572218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.572718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.573153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.573173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.573573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.573969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.573986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.574372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.574749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.574771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.575179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.575610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.575633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.576114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.576522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.576545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.576842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.577200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.577223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 
00:29:19.179 [2024-07-24 17:54:40.577622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.578198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.578226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.578473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.578871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.578896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.579401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.579813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.579835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.579999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.580423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.580445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.580964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.581500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.581526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.582213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.582642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.582665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.179 [2024-07-24 17:54:40.583154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.583648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.583670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 
00:29:19.179 [2024-07-24 17:54:40.584093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.584445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.179 [2024-07-24 17:54:40.584463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.179 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.584876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.585281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.585302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.585710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.586060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.586077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.586521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.586919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.586933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.587445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.587801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.587819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.588248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.588242] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:19.180 [2024-07-24 17:54:40.588287] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.180 [2024-07-24 17:54:40.588628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.588645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 
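At this point the restarted target has reached DPDK initialization: the log reports SPDK v24.01.1-pre (git sha1 dbef7efac) on DPDK 23.11.0, and the EAL parameter list carries the core mask -c 0xF0, matching the -m 0xF0 passed to nvmf_tgt. Mask 0xF0 selects CPUs 4 through 7, which a short bit-test loop can confirm (illustrative arithmetic only):

# Which CPUs does core mask 0xF0 cover?  Bits 4..7 are set, so CPUs 4, 5, 6, 7.
mask=0xF0
for cpu in $(seq 0 31); do
    if (( (mask >> cpu) & 1 )); then
        echo "core mask includes CPU $cpu"
    fi
done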
00:29:19.180 [2024-07-24 17:54:40.589124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.589528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.589545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.589947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.590335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.590351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.590791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.591235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.591250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.591601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.592049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.592063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.592513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.592853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.592867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.592897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2050200 (9): Bad file descriptor 00:29:19.180 [2024-07-24 17:54:40.593437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.593869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.593885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.594333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.594805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.594819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 
00:29:19.180 [2024-07-24 17:54:40.595221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.595665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.595679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.596127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.596530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.596544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.596885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.597220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.597235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.180 qpair failed and we were unable to recover it. 00:29:19.180 [2024-07-24 17:54:40.597714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.598030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.180 [2024-07-24 17:54:40.598050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.598438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.598844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.598857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.599107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.599494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.599507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.599998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.600476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.600489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 
00:29:19.181 [2024-07-24 17:54:40.600935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.601328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.601342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.601743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.602136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.602149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.602616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.602934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.602947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.603353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.603757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.603771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.604093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.604510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.604523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.604909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.605358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.605372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.605772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.606185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.606199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 
00:29:19.181 [2024-07-24 17:54:40.606609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.606918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.606932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.607416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.607750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.607764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.608163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.608569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.608583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.608993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.609386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.609400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.609811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.610046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.181 [2024-07-24 17:54:40.610061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.181 qpair failed and we were unable to recover it. 00:29:19.181 [2024-07-24 17:54:40.610469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.610937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.610951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.611433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.611835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.611849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 
00:29:19.182 [2024-07-24 17:54:40.612306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.612752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.612766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.613154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.613550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.613564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.613957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.614350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.614365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.614778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.615161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.615174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.615578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.615895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.615908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.182 [2024-07-24 17:54:40.616298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.616765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.616778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.617275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.617692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.617706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 
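The EAL warning in this chunk, "No free 2048 kB hugepages reported on node 1", only says that NUMA node 1 has no free 2 MB hugepages at the moment; it is typically harmless as long as the other node can satisfy the allocations, and the target keeps initializing here. The per-node counters can be checked through the standard kernel interfaces; this is a general inspection snippet, not something the test performs:

# Free vs. total 2 MB hugepages, overall and per NUMA node.
grep -i huge /proc/meminfo
for node in /sys/devices/system/node/node*; do
    echo "$node: $(cat "$node"/hugepages/hugepages-2048kB/free_hugepages) free of" \
         "$(cat "$node"/hugepages/hugepages-2048kB/nr_hugepages) reserved"
done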
00:29:19.182 [2024-07-24 17:54:40.618099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.618413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.618426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.618881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.619325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.619340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.619736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.620159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.620173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.620379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.620772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.620786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.621210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.621651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.621665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.621993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.622386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.622401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.622817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.623206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.623220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 
00:29:19.182 [2024-07-24 17:54:40.623552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.623908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.623921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.624309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.624772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.624785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.625194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.625568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.625582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.625964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.626432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.626446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.626894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.627280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.627294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.627626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.628067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.628081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.628482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.628863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.628877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 
00:29:19.182 [2024-07-24 17:54:40.629267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.629674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.629688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.630131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.630579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.630592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.631037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.631491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.631505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.631897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.632364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.632379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.632765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.633146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.633160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.182 [2024-07-24 17:54:40.633485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.633702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.182 [2024-07-24 17:54:40.633715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.182 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.634108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.634556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.634572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 
00:29:19.183 [2024-07-24 17:54:40.635219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.635671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.635685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.636092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.636489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.636502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.636959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.637352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.637367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.637786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.637923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.637937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.638379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.638848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.638863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.639297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.639761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.639775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.640246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.640652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.640666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 
00:29:19.183 [2024-07-24 17:54:40.641006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.641456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.641471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.641960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.642298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.642312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.642726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.643154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.643169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.643615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.644036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.644055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.644217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.644610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.644624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.645297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.645643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.645658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.645992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.646422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.646437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 
00:29:19.183 [2024-07-24 17:54:40.646837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.647280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.647294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.647742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.648133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.648147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.648470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.648908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.648922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.649357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.649581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.649595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.650071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.650529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.650544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.651014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.651423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.651437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.651773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.652176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.652191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 
00:29:19.183 [2024-07-24 17:54:40.652642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.652964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.652978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.653368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.653814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.653828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.654223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.654519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.654533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.654979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.655370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.655386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.655619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.656014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.656028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.656468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.656799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.656812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.183 qpair failed and we were unable to recover it. 00:29:19.183 [2024-07-24 17:54:40.657206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.183 [2024-07-24 17:54:40.657604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.657618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 
00:29:19.184 [2024-07-24 17:54:40.658000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.658406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.658420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.658884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.659276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.659291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.659516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:19.184 [2024-07-24 17:54:40.659757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.660181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.660199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.660594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.661064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.661079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.661472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.661854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.661868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.662209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.662685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.662699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.663027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.663442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.663457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 
00:29:19.184 [2024-07-24 17:54:40.663811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.664158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.664172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.664563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.664967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.664981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.665205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.665603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.665617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.665930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.666397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.666411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.666620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.666964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.666977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.667456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.667846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.667860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.668251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.668631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.668647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 
00:29:19.184 [2024-07-24 17:54:40.669037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.669402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.669416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.669877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.670291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.670306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.670444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.670774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.670787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.671117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.671522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.671536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.671982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.672457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.672471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.672898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.673278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.673293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.673776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.674241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.674255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 
00:29:19.184 [2024-07-24 17:54:40.674635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.675027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.675046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.675452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.675862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.675876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.676264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.676722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.676735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.677061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.677470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.677484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.677879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.678332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.678347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.678659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.678939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.678953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.184 qpair failed and we were unable to recover it. 00:29:19.184 [2024-07-24 17:54:40.679333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.679776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.184 [2024-07-24 17:54:40.679790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 
00:29:19.185 [2024-07-24 17:54:40.680173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.680567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.680582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.681061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.681346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.681359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.681748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.682188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.682203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.682592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.682986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.683000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.683418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.683884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.683898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.684366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.684704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.684718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.684926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.685249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.685263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 
00:29:19.185 [2024-07-24 17:54:40.685658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.686100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.686115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.686563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.687034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.687054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.687465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.687803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.687817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.688218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.688610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.688623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.689018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.689409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.689423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.689574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.689965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.689978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.690382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.690825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.690839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 
00:29:19.185 [2024-07-24 17:54:40.691299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.691691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.691705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.692169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.692587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.692601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.693068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.693459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.693472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.693938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.694324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.694339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.694805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.695274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.695298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.695639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.696094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.696118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.696600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.696941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.696959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 
00:29:19.185 [2024-07-24 17:54:40.697354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.697827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.697844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.698235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.698626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.698643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.699092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.699573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.699588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.699988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.700327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.700343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.700744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.701029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.185 [2024-07-24 17:54:40.701048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.185 qpair failed and we were unable to recover it. 00:29:19.185 [2024-07-24 17:54:40.701378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.701795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.701811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.702212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.702544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.702559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 
00:29:19.186 [2024-07-24 17:54:40.702952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.703337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.703352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.703792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.704214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.704229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.704673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.705117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.705132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.705530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.705873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.705887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.706277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.706601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.706615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.707005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.707448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.707462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.707858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.708253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.708267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 
00:29:19.186 [2024-07-24 17:54:40.708667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.709151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.709166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.709563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.710029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.710046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.710443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.710771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.710785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.711243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.711712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.711726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.712179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.712590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.712604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.713052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.713479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.713492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.713891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.714333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.714347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 
00:29:19.186 [2024-07-24 17:54:40.714788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.715148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.715162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.715558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.715977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.715990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.716385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.716779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.716793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.717193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.717646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.717659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.718049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.718453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.718466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.718883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.719048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.719061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.719464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.719843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.719856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 
00:29:19.186 [2024-07-24 17:54:40.720248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.720671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.720684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.720997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.721464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.721478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.721923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.722310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.722326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.186 qpair failed and we were unable to recover it. 00:29:19.186 [2024-07-24 17:54:40.722481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.186 [2024-07-24 17:54:40.722871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.722885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.723241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.723709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.723723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.724115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.724511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.724528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.724972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.725440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.725454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 
00:29:19.187 [2024-07-24 17:54:40.725840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.726259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.726273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.726720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.727162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.727176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.727642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.728035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.728059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.728409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.728796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.728809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.729264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.729657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.729670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.730114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.730559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.730572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.730929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.731315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.731330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.731576] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:19.187 [2024-07-24 17:54:40.731682] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:19.187 [2024-07-24 17:54:40.731690] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.187 [2024-07-24 17:54:40.731697] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.187 [2024-07-24 17:54:40.731725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.731805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:19.187 [2024-07-24 17:54:40.731834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:19.187 [2024-07-24 17:54:40.731941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:19.187 [2024-07-24 17:54:40.731942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:19.187 [2024-07-24 17:54:40.732197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.732211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.732534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.732868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.732882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.733335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.733800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.733814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.734149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.734550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.734564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.735022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.735433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.735448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.735914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.736313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.736328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 
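The single "name (RDMA_REQ_RDY_TO_COMPL_PEND) too long" error above comes from registering a tracepoint description whose name does not fit a fixed-size field; it is unrelated to the connection failures and does not stop trace setup, which goes on to report the 0xFFFF tracepoint group mask, the spdk_trace snapshot hint, and the reactors starting on cores 4 through 7. A hypothetical sketch of that kind of bounded-name registration check is shown below; the length limit and the struct are invented for illustration and are not SPDK's actual definitions.

    /* Hypothetical sketch of a registration routine that rejects names
     * longer than a fixed-size field, the general pattern behind the
     * "name (...) too long" error above. Limit and struct are invented. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_TRACE_NAME_LEN 24   /* illustrative limit only */

    struct trace_description {
        char name[MAX_TRACE_NAME_LEN];
    };

    static int register_description_sketch(struct trace_description *d,
                                           const char *name)
    {
        if (strlen(name) >= sizeof(d->name)) {
            fprintf(stderr, "name (%s) too long\n", name);
            return -1;              /* registration refused, as in the log */
        }
        strcpy(d->name, name);
        return 0;
    }

    int main(void)
    {
        struct trace_description d;
        /* The 26-character name reported in the log trips this limit too. */
        if (register_description_sketch(&d, "RDMA_REQ_RDY_TO_COMPL_PEND") != 0)
            return 1;
        return 0;
    }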
00:29:19.187 [2024-07-24 17:54:40.736708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.737094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.737108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.737506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.737841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.737855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.738246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.738621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.738636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.739090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.739537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.739551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.740015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.740417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.740432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.740915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.741305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.741320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.741719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.742174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.742189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 
00:29:19.187 [2024-07-24 17:54:40.742655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.743097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.743113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.743584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.743987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.744001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.187 qpair failed and we were unable to recover it. 00:29:19.187 [2024-07-24 17:54:40.744451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.744918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.187 [2024-07-24 17:54:40.744933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.745337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.745684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.745698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.746198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.746615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.746631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.747106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.747591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.747608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.748028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.748497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.748517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 
00:29:19.188 [2024-07-24 17:54:40.749012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.749529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.749545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.750040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.750518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.750535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.751036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.751521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.751537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.751909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.752414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.752430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.752878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.753276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.753290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.753713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.754180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.754195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.754663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.755129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.755144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 
00:29:19.188 [2024-07-24 17:54:40.755634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.756103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.756118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.756587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.757017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.757031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.757509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.757955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.757974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.758445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.758906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.758920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.759331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.759778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.759793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.760209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.760740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.760754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.761266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.761728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.761742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 
00:29:19.188 [2024-07-24 17:54:40.762172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.762602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.762616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.763004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.763434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.763450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.188 [2024-07-24 17:54:40.763899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.764310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.188 [2024-07-24 17:54:40.764326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.188 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.764725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.765166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.765181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.765598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.765995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.766009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.766404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.766892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.766910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.767468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.767817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.767832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 
00:29:19.510 [2024-07-24 17:54:40.768328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.768821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.768835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.769308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.769819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.769833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.770332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.770745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.770758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.771204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.771615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.771628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.772134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.772519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.772533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.772999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.773469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.773483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.773973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.774375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.774389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 
00:29:19.510 [2024-07-24 17:54:40.774836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.775330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.775344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.775689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.776186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.776214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.776621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.777099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.777114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.777606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.778075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.778090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.778589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.778986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.779001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.779406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.779872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.779886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.780376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.780781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.780795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 
00:29:19.510 [2024-07-24 17:54:40.781183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.781656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.781671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.782142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.782585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.782600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.510 [2024-07-24 17:54:40.783064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.783545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.510 [2024-07-24 17:54:40.783559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.510 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.784029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.784545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.784559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.785030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.785504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.785519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.785925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.786305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.786321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.786735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.787143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.787158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 
00:29:19.511 [2024-07-24 17:54:40.787606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.788011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.788025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.788482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.788868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.788882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.789352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.789816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.789830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.790278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.790755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.790768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.791232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.791618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.791631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.792103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.792521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.792534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.793020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.793542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.793556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 
00:29:19.511 [2024-07-24 17:54:40.793947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.794389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.794402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.794872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.795362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.795376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.795791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.796194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.796209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.796669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.797077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.797091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.797543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.797986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.797999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.798465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.798856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.798869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.799258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.799725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.799738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 
00:29:19.511 [2024-07-24 17:54:40.800231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.800618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.800632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.801022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.801488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.801501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.801956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.802363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.802377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.802775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.803171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.803185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.803654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.804095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.804109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.804529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.804977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.804991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.805461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.805959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.805972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 
00:29:19.511 [2024-07-24 17:54:40.806394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.806840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.806854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.807268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.807662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.807675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.808138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.808532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.511 [2024-07-24 17:54:40.808545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.511 qpair failed and we were unable to recover it. 00:29:19.511 [2024-07-24 17:54:40.808990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.809460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.809474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.810104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.810504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.810518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.810983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.811454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.811468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.811864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.812194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.812207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 
00:29:19.512 [2024-07-24 17:54:40.812639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.813106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.813121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.813471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.813936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.813949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.814276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.814743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.814756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.815133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.815517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.815531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.815963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.816421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.816435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.816960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.817305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.817321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.817767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.818094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.818108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 
00:29:19.512 [2024-07-24 17:54:40.818500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.818987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.819000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.819483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.819889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.819902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.820361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.820751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.820765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.821143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.821611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.821627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.822077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.822547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.822560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.822952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.823353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.823368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.823759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.824119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.824135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 
00:29:19.512 [2024-07-24 17:54:40.824529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.824928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.824944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.825352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.825792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.825806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.826213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.826613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.826627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.827040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.827429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.827443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.827839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.828234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.828250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.828712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.829184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.829200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.829610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.830058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.830072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 
00:29:19.512 [2024-07-24 17:54:40.830468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.830932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.830946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.831441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.831814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.831828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.832274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.832610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.832626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.512 qpair failed and we were unable to recover it. 00:29:19.512 [2024-07-24 17:54:40.832946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.512 [2024-07-24 17:54:40.833415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.833431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.833857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.834257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.834271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.834667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.835077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.835091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.835431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.835907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.835923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 
00:29:19.513 [2024-07-24 17:54:40.836382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.836851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.836866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.837555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.837955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.837970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.838270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.838716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.838729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.839198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.839746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.839760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.840254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.840619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.840633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.841080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.841552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.841565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.841958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.842420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.842436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 
00:29:19.513 [2024-07-24 17:54:40.842842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.843322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.843338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.843825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.844226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.844246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.844648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.845114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.845129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.845564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.845930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.845944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.846414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.846818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.846832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.847320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.847721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.847737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.848162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.848578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.848594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 
00:29:19.513 [2024-07-24 17:54:40.849052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.849383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.849396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.849867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.850270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.850285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.850744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.851151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.851166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.851590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.851996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.852012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.852472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.852851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.852868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.853564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.854062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.854077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.854524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.854935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.854948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 
00:29:19.513 [2024-07-24 17:54:40.855356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.855747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.855761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.856178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.856572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.856586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.857055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.857543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.857558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.513 [2024-07-24 17:54:40.857947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.858341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.513 [2024-07-24 17:54:40.858357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.513 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.858767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.859154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.859168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.859552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.859942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.859955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.860423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.860864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.860879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 
00:29:19.514 [2024-07-24 17:54:40.861348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.861755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.861771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.862222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.862629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.862644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.863113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.863582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.863595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.864101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.864544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.864558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.865005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.865486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.865500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.866010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.866461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.866476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.866886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.867311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.867326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 
00:29:19.514 [2024-07-24 17:54:40.867712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.868185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.868201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.868620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.869088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.869107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.869607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.870093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.870108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.870591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.870988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.871002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.871467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.871862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.871876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.872345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.872861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.872875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.873359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.873764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.873779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 
00:29:19.514 [2024-07-24 17:54:40.874234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.874707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.874722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.875128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.875526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.875539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.875993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.876468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.876482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.876995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.877464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.877478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.877942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.878386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.878401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.514 qpair failed and we were unable to recover it. 00:29:19.514 [2024-07-24 17:54:40.878872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.514 [2024-07-24 17:54:40.879261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.879276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.879720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.880184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.880206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 
00:29:19.515 [2024-07-24 17:54:40.880697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.881167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.881181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.881670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.882152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.882167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.882624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.883007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.883020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.883518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.884007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.884024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.884661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.885089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.885103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.885547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.885988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.886002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.886424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.886906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.886920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 
00:29:19.515 [2024-07-24 17:54:40.887403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.887817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.887831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.888176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.888619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.888633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.889002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.889401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.889416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.889886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.890328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.890342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.890788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.891215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.891229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.891620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.892087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.892101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.892493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.892967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.892983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 
00:29:19.515 [2024-07-24 17:54:40.893372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.893839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.893853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.894304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.894748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.894761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.895232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.895697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.895711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.896155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.896601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.896615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.897081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.897459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.897473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.897865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.898333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.898347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.898742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.899218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.899232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 
00:29:19.515 [2024-07-24 17:54:40.899570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.900017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.900030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.900486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.900959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.900972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.901472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.901951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.901967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.902466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.902856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.902869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.903340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.903807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.903820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.515 qpair failed and we were unable to recover it. 00:29:19.515 [2024-07-24 17:54:40.904309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.515 [2024-07-24 17:54:40.904717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.904731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.905172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.905464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.905477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 
00:29:19.516 [2024-07-24 17:54:40.905942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.906405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.906419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.906900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.907294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.907308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.907780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.908168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.908182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.908539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.908956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.908972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.909384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.909734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.909749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.910194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.910593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.910609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.911140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.911587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.911601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 
00:29:19.516 [2024-07-24 17:54:40.912076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.912479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.912492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.912853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.913252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.913266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.913646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.914088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.914104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.914504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.914888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.914902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.915347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.915727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.915740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.916165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.916636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.916649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.917039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.917467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.917481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 
00:29:19.516 [2024-07-24 17:54:40.917824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.918222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.918238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.918613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.919060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.919074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.919520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.919850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.919864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.920333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.920799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.920814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.921284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.921697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.921711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.922164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.922701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.922715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.923184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.923547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.923562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 
00:29:19.516 [2024-07-24 17:54:40.923984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.924394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.924409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.924755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.925148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.925163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.925584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.926063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.926078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.926498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.926839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.926854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.927339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.927691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.927705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.928137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.928604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.516 [2024-07-24 17:54:40.928619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.516 qpair failed and we were unable to recover it. 00:29:19.516 [2024-07-24 17:54:40.929048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.929495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.929508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 
00:29:19.517 [2024-07-24 17:54:40.929833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.930298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.930312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.930762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.931156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.931170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.931633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.932051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.932065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.932464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.932930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.932943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.933399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.933806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.933819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.934273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.934703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.934717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.935095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.935475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.935488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 
00:29:19.517 [2024-07-24 17:54:40.935898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.936354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.936369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.936776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.937223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.937237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.937634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.938105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.938120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.938566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.939061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.939077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.939478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.939958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.939972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.940442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.940836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.940850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.941257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.941711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.941724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 
00:29:19.517 [2024-07-24 17:54:40.942170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.942570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.942583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.943087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.943488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.943502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.943893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.944359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.944373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.944726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.945118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.945130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.945733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.946236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.946249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.946727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.947124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.947138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 00:29:19.517 [2024-07-24 17:54:40.947539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.947885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.517 [2024-07-24 17:54:40.947898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420 00:29:19.517 qpair failed and we were unable to recover it. 
00:29:19.517 [2024-07-24 17:54:40.948296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.517 [2024-07-24 17:54:40.948704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.517 [2024-07-24 17:54:40.948717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420
00:29:19.517 qpair failed and we were unable to recover it.
[... the same three-line cycle (two "connect() failed, errno = 111" errors from posix_sock_create, one "sock connection error of tqpair=0x7ff704000b90 with addr=10.0.0.2, port=4420" from nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it.") repeats continuously from 17:54:40.949 through 17:54:40.971 ...]
00:29:19.518 [2024-07-24 17:54:40.971982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.518 [2024-07-24 17:54:40.972654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.518 [2024-07-24 17:54:40.972675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420
00:29:19.518 qpair failed and we were unable to recover it.
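For reference, errno 111 on Linux is ECONNREFUSED: the TCP connect to 10.0.0.2 port 4420 (the NVMe/TCP target port used in this run) is being refused because nothing is accepting connections there, which is what posix_sock_create() keeps reporting above. A minimal standalone sketch, not SPDK code, that reproduces the same errno against the address and port taken from this log (adjust them for your own setup):

/* Minimal sketch (assumption: no listener on 10.0.0.2:4420) that shows the
 * same "connect() failed, errno = 111" the log reports. Not SPDK code. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}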
00:29:19.518 [2024-07-24 17:54:40.973151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.518 [2024-07-24 17:54:40.973501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.518 [2024-07-24 17:54:40.973515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420
00:29:19.518 qpair failed and we were unable to recover it.
[... the same cycle repeats for tqpair=0x2042710 with addr=10.0.0.2, port=4420 from 17:54:40.974 through 17:54:41.081 ...]
00:29:19.523 [2024-07-24 17:54:41.082077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.523 [2024-07-24 17:54:41.082500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.523 [2024-07-24 17:54:41.082513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420
00:29:19.523 qpair failed and we were unable to recover it.
00:29:19.523 [2024-07-24 17:54:41.082965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.083391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.083405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.083830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.084218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.084233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.084582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.084915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.084929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.085399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.085795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.085809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.086274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.086662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.086676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.087152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.087696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.087713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.088141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.088547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.088560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 
00:29:19.523 [2024-07-24 17:54:41.088890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.089286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.089300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.089689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.090087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.090101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.090444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.090782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.090795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.091141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.091586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.091600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.092095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.092518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.092531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.092881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.093276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.093290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.093641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.094122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.094136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 
00:29:19.523 [2024-07-24 17:54:41.094488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.094901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.094915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.095315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.095659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.095673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.096191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.096735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.096749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.097144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.097549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.097563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.098009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.098431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.098445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.098887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.099290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-07-24 17:54:41.099304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-07-24 17:54:41.099701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.100175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.100190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 
00:29:19.791 [2024-07-24 17:54:41.100530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.100868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.100882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.101314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.101713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.101726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.102198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.102609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.102623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.103019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.103420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.103435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.103779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.104228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.104242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.104587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.105082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.105096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.105497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.105920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.105933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 
00:29:19.791 [2024-07-24 17:54:41.106413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.106928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.106942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.107465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.107913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.107927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.108370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.108721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.108734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.109158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.109554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.109567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.110005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.110485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.110500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.110855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.111205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.111220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.111571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.112091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.112105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 
00:29:19.791 [2024-07-24 17:54:41.112448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.112845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.112859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.113306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.113699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.113713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.114192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.114595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.114609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.115053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.115543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.115557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.116039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.116535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.116550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.116951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.117338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.117352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 00:29:19.791 [2024-07-24 17:54:41.117823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.118274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.791 [2024-07-24 17:54:41.118289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.791 qpair failed and we were unable to recover it. 
00:29:19.792 [2024-07-24 17:54:41.118688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.119181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.119195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.119663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.120110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.120124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.120475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.120817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.120831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.121325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.121770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.121783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.122231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.122633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.122646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.123074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.123547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.123561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.124002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.124482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.124497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 
00:29:19.792 [2024-07-24 17:54:41.124883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.125226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.125240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.125630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.126031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.126049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.126451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.126923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.126937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.127379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.127722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.127736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.128140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.128539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.128552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.129012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.129445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.129460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.129789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.130210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.130224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 
00:29:19.792 [2024-07-24 17:54:41.130670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.131139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.131157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.131654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.132125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.132139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.132610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.133040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.133058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.133490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.133891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.133903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.134314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.134706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.134718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.135188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.135639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.135652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.136132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.136526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.136540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 
00:29:19.792 [2024-07-24 17:54:41.136968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.137387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.137402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.137792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.138253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.138267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.138665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.139090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.139103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.139450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.139778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.139795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.140400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.140785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.140799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.141227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.141616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.792 [2024-07-24 17:54:41.141629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.792 qpair failed and we were unable to recover it. 00:29:19.792 [2024-07-24 17:54:41.142054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.142398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.142411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 
00:29:19.793 [2024-07-24 17:54:41.142764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.143363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.143377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.143870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.144299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.144313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.144726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.145132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.145147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.145497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.145842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.145856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.146307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.146777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.146791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.147187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.147593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.147606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.148075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.148502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.148516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 
00:29:19.793 [2024-07-24 17:54:41.148960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.149492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.149506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.149909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.150307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.150321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.150783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.151188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.151202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.151551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.152023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.152037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.152537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.153012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.153026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.153440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.153935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.153949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.154370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.154839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.154852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 
00:29:19.793 [2024-07-24 17:54:41.155350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.155745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.155758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.156224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.156616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.156630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.157050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.157499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.157513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.157866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.158315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.158329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.158732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.159066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.159081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.159427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.159821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.159835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.160304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.160697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.160710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 
00:29:19.793 [2024-07-24 17:54:41.161195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.161536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.161550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.161889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.162312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.162327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.162732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.163204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.163218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.163622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.164006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.164020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.164489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.164937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.164950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.165381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.165729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.165742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.793 qpair failed and we were unable to recover it. 00:29:19.793 [2024-07-24 17:54:41.166128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.793 [2024-07-24 17:54:41.166478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.166492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 
00:29:19.794 [2024-07-24 17:54:41.166829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.167293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.167308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.167636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.168090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.168104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.168523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.168916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.168930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.169330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.169780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.169794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.170275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.170763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.170777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.171171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.171520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.171534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.171886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.172376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.172390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 
00:29:19.794 [2024-07-24 17:54:41.172837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.173246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.173260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.173658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.174117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.174132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.174480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.174830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.174845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.175252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.175647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.175662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.176073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.176482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.176495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.176887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.177302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.177317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.177769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.178180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.178194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 
00:29:19.794 [2024-07-24 17:54:41.178787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.179212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.179226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.179619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.180111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.180125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.180543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.180983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.180997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.181455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.181903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.181917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.182315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.182710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.182724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.183136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.183551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.183567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.183980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.184412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.184426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 
00:29:19.794 [2024-07-24 17:54:41.184814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.185321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.185335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.185684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.186107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.186121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.186509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.186907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.186921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.187411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.187836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.187849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.188342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.188692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.188705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.189102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.189522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.189536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 00:29:19.794 [2024-07-24 17:54:41.190059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.190514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.794 [2024-07-24 17:54:41.190528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.794 qpair failed and we were unable to recover it. 
00:29:19.794 [2024-07-24 17:54:41.191066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.191486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.191499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.191848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.192348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.192368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.192799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.193279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.193293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.193720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.194189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.194204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.194545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.194897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.194911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.195348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.195756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.195769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.196258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.196710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.196724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 
00:29:19.795 [2024-07-24 17:54:41.197261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.197782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.197795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.198208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.198672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.198686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.199153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.199597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.199610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.200132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.200640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.200654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.201104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.201499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.201513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.202025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.202548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.202563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.202973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.203415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.203430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 
00:29:19.795 [2024-07-24 17:54:41.203825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.204213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.204228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.204634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.205028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.205041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.205447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.205850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.205864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.206298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.206645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.206658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.207101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.207547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.207561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.207898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.208366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.208380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.208801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.209214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.209228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 
00:29:19.795 [2024-07-24 17:54:41.209655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.210095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.210109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.795 qpair failed and we were unable to recover it. 00:29:19.795 [2024-07-24 17:54:41.210518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.210946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.795 [2024-07-24 17:54:41.210959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.211448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.211854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.211868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.212321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.212707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.212720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.213192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.213616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.213630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.214047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.214519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.214533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.215015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.215473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.215487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 
00:29:19.796 [2024-07-24 17:54:41.216123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.216647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.216660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.217071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.217506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.217521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.218017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.218407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.218421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.218808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.219218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.219233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.219629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.220103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.220118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.220515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.220877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.220890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.221387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.221821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.221835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 
00:29:19.796 [2024-07-24 17:54:41.222276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.222632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.222645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.223040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.223514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.223528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.223870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.224338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.224353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.224825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.225279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.225293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.225635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.226037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.226059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.226444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.226867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.226881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.227315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.227712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.227726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 
00:29:19.796 [2024-07-24 17:54:41.228115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.228536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.228552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.228952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.229356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.229370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.229761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.230223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.230237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.230932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.231413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.231427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.231825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.232287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.232301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.232695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.233124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.233138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.233554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.234023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.234036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 
00:29:19.796 [2024-07-24 17:54:41.234533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.234930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.796 [2024-07-24 17:54:41.234944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.796 qpair failed and we were unable to recover it. 00:29:19.796 [2024-07-24 17:54:41.235384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.235796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.235810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.236413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.236831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.236844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.237251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.237696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.237709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.238180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.238656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.238670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.239310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.239758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.239771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.240159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.240560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.240573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 
00:29:19.797 [2024-07-24 17:54:41.241020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.241447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.241462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.241949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.242416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.242431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.242834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.243229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.243244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.243585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.243971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.243984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.244445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.244796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.244810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.245198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.245604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.245618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.246091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.246423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.246437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 
00:29:19.797 [2024-07-24 17:54:41.246862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.247264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.247278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.247709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.248056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.248070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.248465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.248807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.248820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.249215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.249660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.249674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.250029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.250245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.250259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.250500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.250879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.250893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.251283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.251734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.251749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 
00:29:19.797 [2024-07-24 17:54:41.252250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.252667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.252683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.253028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.253364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.253378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.253862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.254301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.254315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.254662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.255060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.255075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.255325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.255718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.255732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.256073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.256463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.256477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.256805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.257207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.257222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 
00:29:19.797 [2024-07-24 17:54:41.257588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.257910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.257924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.797 [2024-07-24 17:54:41.258162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.258493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.797 [2024-07-24 17:54:41.258507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.797 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.258949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.259284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.259298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.259743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.260081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.260096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.260496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.260812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.260826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.261170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.261564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.261578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.261981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.262380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.262395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 
00:29:19.798 [2024-07-24 17:54:41.262614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.263054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.263068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.263472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.263843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.263860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.264246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.264472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.264485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.264929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.265326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.265340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.265683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.266088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.266102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.266432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.266760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.266774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.267242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.267637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.267651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 
00:29:19.798 [2024-07-24 17:54:41.267989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.268320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.268335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.268715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.269059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.269074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.269464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.269861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.269878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.270033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.270357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.270371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.270844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.271181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.271195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.271526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.271922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.271936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.272366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.272768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.272782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 
00:29:19.798 [2024-07-24 17:54:41.273172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.273578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.273591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.273925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.274255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.274269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.274608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.274930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.274944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.275334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.275742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.275756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.276103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.276480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.276494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.276840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.277182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.277196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.277616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.278011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.278025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 
00:29:19.798 [2024-07-24 17:54:41.278421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.278814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.278828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.279212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.279524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.279537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.798 qpair failed and we were unable to recover it. 00:29:19.798 [2024-07-24 17:54:41.279856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.280230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.798 [2024-07-24 17:54:41.280244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.280706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.281116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.281130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.281461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.281801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.281815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.282161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.282487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.282501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.282896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.283227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.283242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 
00:29:19.799 [2024-07-24 17:54:41.283449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.283758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.283772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.284089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.284496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.284511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.284886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.285217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.285231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.285631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.286165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.286179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.286797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.287215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.287229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.287556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.287944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.287957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.288294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.288696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.288710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 
00:29:19.799 [2024-07-24 17:54:41.288922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.289332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.289346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.289733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.290201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.290215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.290580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.290910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.290924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.291336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.291796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.291816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.292214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.292541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.292554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.292893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.293338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.293353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.293974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.294332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.294354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 
00:29:19.799 [2024-07-24 17:54:41.294706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.295093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.295108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.295439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.295694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.295708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.296102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.296449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.296465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.296808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.297196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.297210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.297594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.297977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.297991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.298376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.298704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.298718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.299062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.299385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.299399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 
00:29:19.799 [2024-07-24 17:54:41.299774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.300158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.300173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.300569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.301018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.301032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.301548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.301929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.799 [2024-07-24 17:54:41.301943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.799 qpair failed and we were unable to recover it. 00:29:19.799 [2024-07-24 17:54:41.302290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.302611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.302625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.303021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.303370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.303385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.303667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.304084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.304098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.304572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.304954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.304968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 
00:29:19.800 [2024-07-24 17:54:41.305348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.305677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.305691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.306026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.306364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.306377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.306765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.307104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.307118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.307328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.307702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.307715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.308030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.308392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.308410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.308824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.309214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.309228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.309719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.310098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.310112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 
00:29:19.800 [2024-07-24 17:54:41.310355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.310680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.310693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.311019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.311246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.311260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.311590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.311918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.311932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.312352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.312804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.312818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.313159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.313501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.313514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.313830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.314159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.314173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.314560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.314958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.314971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 
00:29:19.800 [2024-07-24 17:54:41.315362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.315692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.315709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.316039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.316559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.316573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.316919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.317302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.317317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.317766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.318179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.318193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.318568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.318956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.318971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.319352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.319685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.319699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.800 qpair failed and we were unable to recover it. 00:29:19.800 [2024-07-24 17:54:41.320031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.800 [2024-07-24 17:54:41.320367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.320381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 
00:29:19.801 [2024-07-24 17:54:41.320762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.321094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.321109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.321494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.321901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.321914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.322313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.322690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.322704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.323162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.323489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.323503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.323907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.324232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.324246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.324636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.325050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.325064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.325450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.325834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.325847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 
00:29:19.801 [2024-07-24 17:54:41.326169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.326615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.326629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.327018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.327161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.327175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.327568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.327883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.327897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.328419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.328797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.328810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.329196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.329543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.329557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.329958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.330355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.330369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.330749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.331093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.331107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 
00:29:19.801 [2024-07-24 17:54:41.331442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.331859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.331873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.332274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.332594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.332608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.332994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.333442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.333456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.333837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.334298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.334313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.334722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.335146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.335161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.335573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.336019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.336033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.336479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.336806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.336819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 
00:29:19.801 [2024-07-24 17:54:41.337168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.337511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.337525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.337913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.338254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.338269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.338673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.339094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.339110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.339446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.339786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.339800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.340140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.340511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.340525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.340932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.341271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.341285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 00:29:19.801 [2024-07-24 17:54:41.341642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.341981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.801 [2024-07-24 17:54:41.341995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.801 qpair failed and we were unable to recover it. 
00:29:19.801 [2024-07-24 17:54:41.342334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.342728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.342742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.343216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.343552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.343566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.343886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.344202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.344217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.344663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.344981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.344994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.345224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.345554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.345567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.345974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.346357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.346372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.346777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.347249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.347264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 
00:29:19.802 [2024-07-24 17:54:41.347602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.347928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.347941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.348168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.348495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.348509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.348887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.353443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.353458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.353904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.354278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.354294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.354671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.355090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.355105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.355573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.356046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.356062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.356454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.356780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.356794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 
00:29:19.802 [2024-07-24 17:54:41.357239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.357632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.357646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.358025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.358503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.358517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.358900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.359287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.359304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.359770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.360233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.360248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.360632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.361030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.361048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.361386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.361789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.361803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.362192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.362532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.362546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 
00:29:19.802 [2024-07-24 17:54:41.362921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.363306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.363320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.363720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.364105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.364119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.364577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.364967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.364981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.802 qpair failed and we were unable to recover it. 00:29:19.802 [2024-07-24 17:54:41.365375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.802 [2024-07-24 17:54:41.365750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.365764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.366176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.366564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.366578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.367050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.367458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.367472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.367816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.368147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.368161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 
00:29:19.803 [2024-07-24 17:54:41.368489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.368935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.368948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.369391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.369731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.369745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.370134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.370537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.370550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.370959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.371402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.371417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.371649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.372068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.372082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.372568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.373053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.373067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.373463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.373928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.373942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 
00:29:19.803 [2024-07-24 17:54:41.374345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.374749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.374763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.375230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.375695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.375709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.376104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.376491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.376506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.376996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.377393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.377408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.377751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.378128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.378142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.378350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.378758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.378772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.379250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.379646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.379660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 
00:29:19.803 [2024-07-24 17:54:41.380060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.380528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.380543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.380930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.381346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.381360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:19.803 [2024-07-24 17:54:41.381827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.382280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.803 [2024-07-24 17:54:41.382294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:19.803 qpair failed and we were unable to recover it. 00:29:20.067 [2024-07-24 17:54:41.382686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.383108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.383122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.067 qpair failed and we were unable to recover it. 00:29:20.067 [2024-07-24 17:54:41.383545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.384009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.384023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.067 qpair failed and we were unable to recover it. 00:29:20.067 [2024-07-24 17:54:41.384443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.384911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.384924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.067 qpair failed and we were unable to recover it. 00:29:20.067 [2024-07-24 17:54:41.385144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.385535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.385549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.067 qpair failed and we were unable to recover it. 
00:29:20.067 [2024-07-24 17:54:41.386034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.386483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.386497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.067 qpair failed and we were unable to recover it. 00:29:20.067 [2024-07-24 17:54:41.386943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.387337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.067 [2024-07-24 17:54:41.387352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.067 qpair failed and we were unable to recover it. 00:29:20.067 [2024-07-24 17:54:41.387770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.388239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.388253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.388648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.389105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.389120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.389509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.389905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.389918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.390382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.390587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.390600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.390750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.391097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.391111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 
00:29:20.068 [2024-07-24 17:54:41.391507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.391840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.391853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.392263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.392604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.392617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.393076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.393488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.393502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.393954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.394403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.394417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.394886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.395300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.395314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.395689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.396081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.396095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.396497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.396897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.396910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 
00:29:20.068 [2024-07-24 17:54:41.397403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.397869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.397882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.398372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.398791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.398804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.399297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.399685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.399698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.400111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.400582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.400596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.401102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.401560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.401576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.401973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.402432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.402446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.402987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.403481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.403495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 
00:29:20.068 [2024-07-24 17:54:41.403890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.404285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.404299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.404511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.404969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.404982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.405453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.405891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.405905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.406281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.406625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.406638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.407105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.407452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.407465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.407858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.408255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.408270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.408686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.409025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.409039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 
00:29:20.068 [2024-07-24 17:54:41.409489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.409948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.409962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.410414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.410801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.410814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.411282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.411752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.411765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.412258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.412719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.412732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.413117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.413593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.413607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.414097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.414499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.414513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.414969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.415415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.415429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 
00:29:20.068 [2024-07-24 17:54:41.415875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.416270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.416284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.416731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.417199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.417213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.417700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.418095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.418109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.418395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.418872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.418886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.419389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.419732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.419745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.420137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.420603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.420617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.420951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 17:54:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:20.068 17:54:41 -- common/autotest_common.sh@852 -- # return 0 00:29:20.068 [2024-07-24 17:54:41.421411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.421428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 17:54:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:20.068 qpair failed and we were unable to recover it. 
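The recurring "connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock" pair above is the initiator side of the test retrying TCP connections to 10.0.0.2:4420: errno 111 is ECONNREFUSED on Linux, i.e. nothing is accepting on that port yet, which matches the interleaved xtrace showing the target-side script only now leaving start_nvmf_tgt and beginning its RPC configuration. A minimal way to observe the same condition from a shell on the initiator host (illustrative sketch only, not part of the test script; assumes bash with /dev/tcp support and coreutils timeout):

    # Probe the NVMe/TCP listener port; while no listener exists the
    # connect attempt is refused, mirroring the errno 111 entries above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "10.0.0.2:4420 is accepting connections"
    else
        echo "10.0.0.2:4420 refused or unreachable (errno 111 = ECONNREFUSED)"
    fi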
00:29:20.068 17:54:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:20.068 17:54:41 -- common/autotest_common.sh@10 -- # set +x 00:29:20.068 [2024-07-24 17:54:41.421922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.422336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.422351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.422798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.423211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.423225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.423674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.424116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.424130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.424518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.424961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.424975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.425442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.425909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.068 [2024-07-24 17:54:41.425923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.068 qpair failed and we were unable to recover it. 00:29:20.068 [2024-07-24 17:54:41.426316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.426759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.426774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.427218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.427634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.427648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 
00:29:20.069 [2024-07-24 17:54:41.428115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.428530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.428544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.428865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.429262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.429277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.429746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.430137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.430152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.430494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.431005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.431020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.431485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.431854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.431868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.432270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.432713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.432728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.433119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.433501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.433514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 
00:29:20.069 [2024-07-24 17:54:41.434005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.434405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.434420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.434902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.435282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.435297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.435740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.436198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.436216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.436659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.437054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.437069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.437475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.437944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.437957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.438430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.438866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.438879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.439304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.439595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.439608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 
00:29:20.069 [2024-07-24 17:54:41.440010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.440403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.440417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.440817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.441201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.441215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.441602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.442071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.442085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.442504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.442921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.442935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.443349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.443746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.443760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.444177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.444589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.444603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.445077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.445491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.445505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 
00:29:20.069 [2024-07-24 17:54:41.445852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.446294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.446309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.446761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.447229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.447245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.447585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.448026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.448040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.448418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.448809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.448823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.449251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.449641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.449657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.450078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.450429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.450443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.450783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.451382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.451396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 
00:29:20.069 [2024-07-24 17:54:41.451796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.452193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.452208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.452629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.453039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.453058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.453411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.453807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.453821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.454300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 17:54:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.069 [2024-07-24 17:54:41.454721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.454737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 17:54:41 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:20.069 17:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.069 [2024-07-24 17:54:41.455244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 17:54:41 -- common/autotest_common.sh@10 -- # set +x 00:29:20.069 [2024-07-24 17:54:41.455642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.455657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.456068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.456518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.456532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.456974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.457420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.457433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 
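The xtrace line "rpc_cmd bdev_malloc_create 64 512 -b Malloc0" is the target script creating the RAM-backed block device that will later be exported as a namespace: in SPDK's bdev_malloc_create RPC the two positional arguments are the bdev size in MB and the block size in bytes, so this is a 64 MB bdev with 512-byte blocks named Malloc0. In the test harness rpc_cmd forwards its arguments to the running target's RPC interface; issued by hand it would look roughly like the sketch below (assuming the stock scripts/rpc.py client and the default RPC socket):

    # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0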
00:29:20.069 [2024-07-24 17:54:41.457801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.458283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.458298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.458652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.459069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.459083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.459423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.459924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.459938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.460416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.460766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.460781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.461198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.461595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.461610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.462131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.462467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.462481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.462877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.463209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.463224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 
00:29:20.069 [2024-07-24 17:54:41.463621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.464172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.464188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.464656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.465124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.465140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.465633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.466052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.466067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.466410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.466758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.466773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.467258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.467601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.467617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.468081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.468508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.468526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 00:29:20.069 [2024-07-24 17:54:41.468963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.469417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.069 [2024-07-24 17:54:41.469435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.069 qpair failed and we were unable to recover it. 
00:29:20.069 [2024-07-24 17:54:41.469892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.470298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.470318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.470716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.471057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.471074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.471519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.471851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.471865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.472334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.472736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.472750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.473206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.473551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.473565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 Malloc0 00:29:20.070 [2024-07-24 17:54:41.473895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 17:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.070 [2024-07-24 17:54:41.474273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.474288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 17:54:41 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 17:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.070 17:54:41 -- common/autotest_common.sh@10 -- # set +x 00:29:20.070 [2024-07-24 17:54:41.474676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.475067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.475081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 
00:29:20.070 [2024-07-24 17:54:41.475477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.475883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.475896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.476388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.476789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.476803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.477273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.477455] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.070 [2024-07-24 17:54:41.477715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.477729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.478201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.478599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.478612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.479010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.479478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.479493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.479964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.480420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.480434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.480834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.481244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.481258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 
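The traced "rpc_cmd nvmf_create_transport -t tcp -o" followed by the "*** TCP Transport Init ***" notice from tcp.c confirms that the TCP transport has been instantiated inside the target; subsystems and listeners can only be attached once a transport exists. A rough standalone equivalent, again assuming scripts/rpc.py and keeping the arguments exactly as traced (the extra -o option is reproduced verbatim from the trace and not interpreted here):

    # Register the TCP transport with the NVMe-oF target.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o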
00:29:20.070 [2024-07-24 17:54:41.481605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.481936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.481950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.482403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.482872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.482885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.483287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.483752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.483765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.484158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.484607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.484621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.485087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.485495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.485509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 17:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.070 17:54:41 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:20.070 17:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.070 [2024-07-24 17:54:41.485900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 17:54:41 -- common/autotest_common.sh@10 -- # set +x 00:29:20.070 [2024-07-24 17:54:41.486297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.486312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.486757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.487200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.487214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 
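"rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001" creates the subsystem the initiator will eventually connect to; as I read the flags, -a allows any host NQN to connect and -s sets the reported serial number. A hand-run sketch with the same arguments (scripts/rpc.py assumed):

    # Create subsystem cnode1, allow any host (-a), set its serial number (-s).
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001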
00:29:20.070 [2024-07-24 17:54:41.487646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.488046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.488060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.488436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.488905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.488919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.489315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.489800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.489813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.490207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.490636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.490650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.491094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.491473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.491487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.491901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.492232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.492247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.492658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.493136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.493150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 
00:29:20.070 [2024-07-24 17:54:41.493537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 17:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.070 17:54:41 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:20.070 [2024-07-24 17:54:41.493914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.493928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 17:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.070 17:54:41 -- common/autotest_common.sh@10 -- # set +x 00:29:20.070 [2024-07-24 17:54:41.494316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.494802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.494815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.495213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.495610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.495623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.496083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.496465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.496479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.496875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.497268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.497282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.497726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.498101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.498115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.498587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.499030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.499049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 
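The nvmf_subsystem_add_ns call traced above attaches a namespace backed by a bdev named Malloc0; the bdev itself is created earlier in the script and is not visible in this excerpt. A hypothetical standalone equivalent using scripts/rpc.py, assuming a running nvmf target on the default RPC socket (the 64 MiB size and 512-byte block size are placeholder values, not taken from this log):

  # Create a RAM-backed bdev and expose it as a namespace of cnode1 (sizes are assumptions).
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0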
00:29:20.070 [2024-07-24 17:54:41.499493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.499957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.499970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.500321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.500712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.500725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.501194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.501675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.501689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 17:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.070 17:54:41 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.070 17:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.070 [2024-07-24 17:54:41.502028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 17:54:41 -- common/autotest_common.sh@10 -- # set +x 00:29:20.070 [2024-07-24 17:54:41.502446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.502461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.502879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.503343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.503358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.503754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.504233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.504265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 00:29:20.070 [2024-07-24 17:54:41.504555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.504953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.070 [2024-07-24 17:54:41.504967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.070 qpair failed and we were unable to recover it. 
00:29:20.070 [2024-07-24 17:54:41.505426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.071 [2024-07-24 17:54:41.505685] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.071 [2024-07-24 17:54:41.505825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.071 [2024-07-24 17:54:41.505839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2042710 with addr=10.0.0.2, port=4420 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 [2024-07-24 17:54:41.508123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.508324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.508351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.508362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.508372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.508398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 17:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.071 17:54:41 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:20.071 17:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.071 17:54:41 -- common/autotest_common.sh@10 -- # set +x 00:29:20.071 17:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.071 17:54:41 -- host/target_disconnect.sh@58 -- # wait 783907 00:29:20.071 [2024-07-24 17:54:41.518019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.518360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.518384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.518394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.518404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.518430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 
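The two nvmf_subsystem_add_listener calls traced above are what finally bring the target up on 10.0.0.2:4420 (note the "NVMe/TCP Target Listening" notice), after which the errno 111 retries stop and the failure mode changes to rejected Fabrics CONNECT commands. A hypothetical standalone equivalent of the same two RPCs, assuming the TCP transport was already created earlier in the script:

  # Listen for the I/O subsystem and for discovery on the same TCP portal.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420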
00:29:20.071 [2024-07-24 17:54:41.528130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.528296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.528315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.528322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.528328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.528346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 [2024-07-24 17:54:41.537961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.538110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.538130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.538137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.538143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.538160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 [2024-07-24 17:54:41.548079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.548233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.548252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.548258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.548265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.548281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 
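From here on the log repeats one pattern: the target rejects each I/O queue CONNECT with "Unknown controller ID 0x1" and the host reports sct 1, sc 130 before giving up on the qpair. A small sketch that decodes those status fields, assuming the standard NVMe-oF Fabrics status-code names (the log itself does not spell them out):

  # Decode the Fabrics CONNECT status reported above.
  sct=1    # status code type 1: command specific
  sc=130   # decimal 130 = 0x82
  printf 'CONNECT failed: sct=0x%02x sc=0x%02x\n' "$sct" "$sc"
  # For Fabrics commands, command-specific status 0x82 is Connect Invalid Parameters,
  # which lines up with the target-side "Unknown controller ID 0x1": the I/O queue
  # CONNECT names a controller ID the target does not recognize, so every qpair
  # attempt in this excerpt fails the same way.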
00:29:20.071 [2024-07-24 17:54:41.558029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.558213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.558232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.558239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.558245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.558262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 [2024-07-24 17:54:41.568143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.568276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.568298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.568306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.568312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.568328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 [2024-07-24 17:54:41.578143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.578278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.578297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.578304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.578310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.578327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 
00:29:20.071 [2024-07-24 17:54:41.588215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.588348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.588367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.588374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.588380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.588397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 [2024-07-24 17:54:41.598141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.598271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.598289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.598296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.598302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.598318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 [2024-07-24 17:54:41.608244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.608383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.608402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.608408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.608418] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.608435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 
00:29:20.071 [2024-07-24 17:54:41.618244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.618383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.618403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.618409] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.618416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.618433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 [2024-07-24 17:54:41.628228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.628362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.628380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.628387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.628393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.628409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 [2024-07-24 17:54:41.638494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.638625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.638643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.638650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.638657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.638673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 
00:29:20.071 [2024-07-24 17:54:41.648346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.648495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.648513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.648519] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.648526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.648542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.071 [2024-07-24 17:54:41.658408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.071 [2024-07-24 17:54:41.658593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.071 [2024-07-24 17:54:41.658611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.071 [2024-07-24 17:54:41.658618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.071 [2024-07-24 17:54:41.658624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.071 [2024-07-24 17:54:41.658641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.071 qpair failed and we were unable to recover it. 00:29:20.331 [2024-07-24 17:54:41.668336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.331 [2024-07-24 17:54:41.668470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.331 [2024-07-24 17:54:41.668488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.331 [2024-07-24 17:54:41.668495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.331 [2024-07-24 17:54:41.668502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.331 [2024-07-24 17:54:41.668519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.331 qpair failed and we were unable to recover it. 
00:29:20.331 [2024-07-24 17:54:41.678423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.331 [2024-07-24 17:54:41.678558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.331 [2024-07-24 17:54:41.678576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.331 [2024-07-24 17:54:41.678583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.331 [2024-07-24 17:54:41.678589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.678606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 00:29:20.332 [2024-07-24 17:54:41.688433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.688565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.688583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.688590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.688596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.688611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 00:29:20.332 [2024-07-24 17:54:41.698453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.698587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.698606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.698613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.698623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.698639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 
00:29:20.332 [2024-07-24 17:54:41.708522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.708667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.708686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.708693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.708699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.708716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 00:29:20.332 [2024-07-24 17:54:41.718469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.718602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.718620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.718627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.718634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.718649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 00:29:20.332 [2024-07-24 17:54:41.728547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.728680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.728697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.728704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.728711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.728727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 
00:29:20.332 [2024-07-24 17:54:41.738665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.738812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.738830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.738837] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.738843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.738859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 00:29:20.332 [2024-07-24 17:54:41.748685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.748827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.748845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.748852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.748858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.748874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 00:29:20.332 [2024-07-24 17:54:41.758724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.758857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.758875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.758882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.758888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.758904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 
00:29:20.332 [2024-07-24 17:54:41.768744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.768883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.768901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.768908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.768914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.768930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 00:29:20.332 [2024-07-24 17:54:41.778642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.778777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.778795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.778803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.778809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.778825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 00:29:20.332 [2024-07-24 17:54:41.788683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.788835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.788853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.788859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.788873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.788889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 
00:29:20.332 [2024-07-24 17:54:41.798787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.798918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.798936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.798943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.798949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.798965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.332 qpair failed and we were unable to recover it. 00:29:20.332 [2024-07-24 17:54:41.808816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.332 [2024-07-24 17:54:41.808963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.332 [2024-07-24 17:54:41.808981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.332 [2024-07-24 17:54:41.808988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.332 [2024-07-24 17:54:41.808994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.332 [2024-07-24 17:54:41.809011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 00:29:20.333 [2024-07-24 17:54:41.818836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.818974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.818992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.818999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.819005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.819021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 
00:29:20.333 [2024-07-24 17:54:41.828931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.829092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.829111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.829118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.829124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.829140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 00:29:20.333 [2024-07-24 17:54:41.838899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.839036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.839060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.839067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.839073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.839089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 00:29:20.333 [2024-07-24 17:54:41.848933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.849071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.849090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.849096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.849102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.849119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 
00:29:20.333 [2024-07-24 17:54:41.858943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.859083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.859101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.859108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.859114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.859130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 00:29:20.333 [2024-07-24 17:54:41.868969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.869113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.869131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.869138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.869144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.869160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 00:29:20.333 [2024-07-24 17:54:41.879009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.879322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.879340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.879350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.879357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.879373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 
00:29:20.333 [2024-07-24 17:54:41.889028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.889164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.889183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.889190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.889195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.889211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 00:29:20.333 [2024-07-24 17:54:41.899080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.899216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.899234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.899241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.899247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.899263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 00:29:20.333 [2024-07-24 17:54:41.909099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.909240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.909258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.909265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.909271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.909288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 
00:29:20.333 [2024-07-24 17:54:41.919133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.333 [2024-07-24 17:54:41.919278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.333 [2024-07-24 17:54:41.919297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.333 [2024-07-24 17:54:41.919304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.333 [2024-07-24 17:54:41.919310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:20.333 [2024-07-24 17:54:41.919327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.333 qpair failed and we were unable to recover it. 00:29:20.594 [2024-07-24 17:54:41.929177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:41.929347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:41.929375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:41.929386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:41.929395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:41.929420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 00:29:20.595 [2024-07-24 17:54:41.939242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:41.939388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:41.939407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:41.939414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:41.939420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:41.939438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 
00:29:20.595 [2024-07-24 17:54:41.949213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:41.949350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:41.949368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:41.949375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:41.949382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:41.949399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 00:29:20.595 [2024-07-24 17:54:41.959296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:41.959446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:41.959464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:41.959472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:41.959478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:41.959495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 00:29:20.595 [2024-07-24 17:54:41.969273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:41.969407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:41.969425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:41.969435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:41.969441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:41.969459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 
00:29:20.595 [2024-07-24 17:54:41.979262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:41.979396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:41.979414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:41.979421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:41.979428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:41.979445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 00:29:20.595 [2024-07-24 17:54:41.989336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:41.989470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:41.989488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:41.989495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:41.989501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:41.989518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 00:29:20.595 [2024-07-24 17:54:41.999354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:41.999485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:41.999503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:41.999510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:41.999516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:41.999533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 
00:29:20.595 [2024-07-24 17:54:42.009358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:42.009486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:42.009504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:42.009511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:42.009517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:42.009534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 00:29:20.595 [2024-07-24 17:54:42.019433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:42.019568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:42.019586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:42.019593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:42.019599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:42.019616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 00:29:20.595 [2024-07-24 17:54:42.029473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:42.029603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:42.029621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:42.029628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:42.029634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:42.029651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 
00:29:20.595 [2024-07-24 17:54:42.039484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:42.039619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:42.039637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:42.039644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:42.039650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:42.039666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 00:29:20.595 [2024-07-24 17:54:42.049547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.595 [2024-07-24 17:54:42.049702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.595 [2024-07-24 17:54:42.049719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.595 [2024-07-24 17:54:42.049726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.595 [2024-07-24 17:54:42.049731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.595 [2024-07-24 17:54:42.049749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.595 qpair failed and we were unable to recover it. 00:29:20.595 [2024-07-24 17:54:42.059474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.059608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.059626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.059636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.059642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.059658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 
00:29:20.596 [2024-07-24 17:54:42.069574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.069712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.069730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.069737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.069743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.069760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 00:29:20.596 [2024-07-24 17:54:42.079609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.079742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.079760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.079767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.079773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.079789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 00:29:20.596 [2024-07-24 17:54:42.089641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.089795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.089813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.089819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.089826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.089842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 
00:29:20.596 [2024-07-24 17:54:42.099675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.099810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.099828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.099835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.099841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.099858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 00:29:20.596 [2024-07-24 17:54:42.109810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.109939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.109957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.109963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.109969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.109986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 00:29:20.596 [2024-07-24 17:54:42.119705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.119842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.119860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.119867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.119873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.119890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 
00:29:20.596 [2024-07-24 17:54:42.129678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.129811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.129829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.129836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.129842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.129859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 00:29:20.596 [2024-07-24 17:54:42.139780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.139915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.139933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.139940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.139946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.139963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 00:29:20.596 [2024-07-24 17:54:42.149790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.149923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.149944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.149951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.149957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.149974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 
00:29:20.596 [2024-07-24 17:54:42.159819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.159954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.159971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.159978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.159984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.160001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 00:29:20.596 [2024-07-24 17:54:42.169849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.169994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.170011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.170018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.170024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.170041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 00:29:20.596 [2024-07-24 17:54:42.179889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.180024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.180046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.180054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.596 [2024-07-24 17:54:42.180060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.596 [2024-07-24 17:54:42.180077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.596 qpair failed and we were unable to recover it. 
00:29:20.596 [2024-07-24 17:54:42.189885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.596 [2024-07-24 17:54:42.190024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.596 [2024-07-24 17:54:42.190047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.596 [2024-07-24 17:54:42.190055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.597 [2024-07-24 17:54:42.190061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.597 [2024-07-24 17:54:42.190082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.597 qpair failed and we were unable to recover it. 00:29:20.857 [2024-07-24 17:54:42.199945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.857 [2024-07-24 17:54:42.200087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.857 [2024-07-24 17:54:42.200105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.857 [2024-07-24 17:54:42.200113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.857 [2024-07-24 17:54:42.200119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.857 [2024-07-24 17:54:42.200136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.857 qpair failed and we were unable to recover it. 00:29:20.857 [2024-07-24 17:54:42.209968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.857 [2024-07-24 17:54:42.210123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.857 [2024-07-24 17:54:42.210141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.857 [2024-07-24 17:54:42.210148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.857 [2024-07-24 17:54:42.210154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.857 [2024-07-24 17:54:42.210171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.857 qpair failed and we were unable to recover it. 
00:29:20.857 [2024-07-24 17:54:42.220019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.857 [2024-07-24 17:54:42.220159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.857 [2024-07-24 17:54:42.220178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.857 [2024-07-24 17:54:42.220184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.220191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.220207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 00:29:20.858 [2024-07-24 17:54:42.230027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.230165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.230183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.230190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.230196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.230213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 00:29:20.858 [2024-07-24 17:54:42.240065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.240200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.240221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.240228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.240234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.240250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 
00:29:20.858 [2024-07-24 17:54:42.250110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.250245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.250263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.250270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.250276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.250293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 00:29:20.858 [2024-07-24 17:54:42.260109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.260291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.260309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.260315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.260321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.260339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 00:29:20.858 [2024-07-24 17:54:42.270169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.270309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.270327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.270334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.270340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.270358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 
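Each failed attempt ends the same way: nvme_tcp_qpair_process_completions() gives up on the queue pair and spdk_nvme_qpair_process_completions() reports the "CQ transport error -6 (No such device or address)", i.e. -ENXIO. The fragment below is a hedged illustration of how that negative return reaches a host application polling the qpair; poll_io_qpair() is a made-up helper, and controller/qpair setup is assumed to have happened elsewhere.

/*
 * Hedged illustration only: how the -6 (-ENXIO) above surfaces to a host
 * application. poll_io_qpair() is a hypothetical helper; allocating the
 * controller and qpair (spdk_nvme_connect()/spdk_nvme_ctrlr_alloc_io_qpair())
 * is assumed to have been done elsewhere.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

/* Returns true while the qpair is still usable; false once the transport
 * reports an error, after which the qpair must be freed (and optionally
 * re-created) rather than polled again. */
static bool poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means "process everything that is ready". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* rc == -6 corresponds to the "-6 (No such device or address)"
		 * lines in the log. */
		fprintf(stderr, "qpair poll failed: %d (%s)\n", rc, strerror(-rc));
		return false;
	}
	return true;
}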
00:29:20.858 [2024-07-24 17:54:42.280184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.280321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.280339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.280345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.280354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.280371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 00:29:20.858 [2024-07-24 17:54:42.290227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.290359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.290377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.290383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.290390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.290407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 00:29:20.858 [2024-07-24 17:54:42.300248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.300378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.300396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.300402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.300409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.300425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 
00:29:20.858 [2024-07-24 17:54:42.310208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.310345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.310363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.310370] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.310376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.310393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 00:29:20.858 [2024-07-24 17:54:42.320305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.320437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.320455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.320462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.320468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.320485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 00:29:20.858 [2024-07-24 17:54:42.330334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.330476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.330493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.330500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.330506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.330523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 
00:29:20.858 [2024-07-24 17:54:42.340378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.340514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.340532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.340538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.340545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.858 [2024-07-24 17:54:42.340561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 qpair failed and we were unable to recover it. 00:29:20.858 [2024-07-24 17:54:42.350394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.858 [2024-07-24 17:54:42.350531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.858 [2024-07-24 17:54:42.350549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.858 [2024-07-24 17:54:42.350555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.858 [2024-07-24 17:54:42.350561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.350578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 00:29:20.859 [2024-07-24 17:54:42.360411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.859 [2024-07-24 17:54:42.360542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.859 [2024-07-24 17:54:42.360560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.859 [2024-07-24 17:54:42.360567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.859 [2024-07-24 17:54:42.360573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.360590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 
00:29:20.859 [2024-07-24 17:54:42.370432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.859 [2024-07-24 17:54:42.370568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.859 [2024-07-24 17:54:42.370586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.859 [2024-07-24 17:54:42.370593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.859 [2024-07-24 17:54:42.370602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.370619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 00:29:20.859 [2024-07-24 17:54:42.380391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.859 [2024-07-24 17:54:42.380538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.859 [2024-07-24 17:54:42.380556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.859 [2024-07-24 17:54:42.380563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.859 [2024-07-24 17:54:42.380569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.380586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 00:29:20.859 [2024-07-24 17:54:42.390434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.859 [2024-07-24 17:54:42.390569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.859 [2024-07-24 17:54:42.390587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.859 [2024-07-24 17:54:42.390594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.859 [2024-07-24 17:54:42.390600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.390616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 
00:29:20.859 [2024-07-24 17:54:42.400463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.859 [2024-07-24 17:54:42.400638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.859 [2024-07-24 17:54:42.400655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.859 [2024-07-24 17:54:42.400662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.859 [2024-07-24 17:54:42.400668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.400685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 00:29:20.859 [2024-07-24 17:54:42.410542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.859 [2024-07-24 17:54:42.410685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.859 [2024-07-24 17:54:42.410702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.859 [2024-07-24 17:54:42.410709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.859 [2024-07-24 17:54:42.410715] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.410732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 00:29:20.859 [2024-07-24 17:54:42.420563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.859 [2024-07-24 17:54:42.420701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.859 [2024-07-24 17:54:42.420719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.859 [2024-07-24 17:54:42.420725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.859 [2024-07-24 17:54:42.420731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.420748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 
00:29:20.859 [2024-07-24 17:54:42.430559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.859 [2024-07-24 17:54:42.430693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.859 [2024-07-24 17:54:42.430711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.859 [2024-07-24 17:54:42.430717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.859 [2024-07-24 17:54:42.430724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.430741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 00:29:20.859 [2024-07-24 17:54:42.440652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.859 [2024-07-24 17:54:42.440777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.859 [2024-07-24 17:54:42.440795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.859 [2024-07-24 17:54:42.440802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.859 [2024-07-24 17:54:42.440808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.440825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 00:29:20.859 [2024-07-24 17:54:42.450600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.859 [2024-07-24 17:54:42.450726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.859 [2024-07-24 17:54:42.450744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.859 [2024-07-24 17:54:42.450750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.859 [2024-07-24 17:54:42.450757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:20.859 [2024-07-24 17:54:42.450774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.859 qpair failed and we were unable to recover it. 
00:29:21.121 [2024-07-24 17:54:42.460648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.460789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.460807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.460818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.460824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.460841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 00:29:21.121 [2024-07-24 17:54:42.470712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.470844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.470861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.470868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.470874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.470891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 00:29:21.121 [2024-07-24 17:54:42.480835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.481010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.481028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.481035] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.481041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.481063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 
00:29:21.121 [2024-07-24 17:54:42.490790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.491118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.491136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.491143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.491149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.491166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 00:29:21.121 [2024-07-24 17:54:42.500810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.500942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.500959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.500966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.500972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.500988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 00:29:21.121 [2024-07-24 17:54:42.510829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.510964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.510982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.510989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.510995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.511011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 
00:29:21.121 [2024-07-24 17:54:42.520804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.520942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.520960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.520968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.520975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.520994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 00:29:21.121 [2024-07-24 17:54:42.530900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.531041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.531064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.531071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.531077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.531094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 00:29:21.121 [2024-07-24 17:54:42.540921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.541062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.541081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.541088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.541094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.541111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 
00:29:21.121 [2024-07-24 17:54:42.550957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.551093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.551111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.551123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.551129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.551147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 00:29:21.121 [2024-07-24 17:54:42.560949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.561090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.561108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.561116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.561122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.121 [2024-07-24 17:54:42.561139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.121 qpair failed and we were unable to recover it. 00:29:21.121 [2024-07-24 17:54:42.570973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.121 [2024-07-24 17:54:42.571113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.121 [2024-07-24 17:54:42.571131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.121 [2024-07-24 17:54:42.571138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.121 [2024-07-24 17:54:42.571144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.571161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 
00:29:21.122 [2024-07-24 17:54:42.581004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.581140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.581158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.581165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.581171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.581188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 00:29:21.122 [2024-07-24 17:54:42.591111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.591245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.591262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.591269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.591275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.591292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 00:29:21.122 [2024-07-24 17:54:42.601160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.601300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.601317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.601323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.601329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.601345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 
00:29:21.122 [2024-07-24 17:54:42.611209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.611359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.611377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.611383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.611390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.611407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 00:29:21.122 [2024-07-24 17:54:42.621215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.621350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.621367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.621374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.621380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.621396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 00:29:21.122 [2024-07-24 17:54:42.631147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.631279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.631297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.631304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.631310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.631326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 
00:29:21.122 [2024-07-24 17:54:42.641280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.641413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.641435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.641442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.641448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.641465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 00:29:21.122 [2024-07-24 17:54:42.651309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.651445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.651462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.651469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.651475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.651493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 00:29:21.122 [2024-07-24 17:54:42.661242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.661375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.661392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.661399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.661405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.661422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 
00:29:21.122 [2024-07-24 17:54:42.671266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.671397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.671414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.671421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.671427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.671444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 00:29:21.122 [2024-07-24 17:54:42.681350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.681530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.681548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.681555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.681561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.681585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 00:29:21.122 [2024-07-24 17:54:42.691375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.691504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.691521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.691528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.691534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.122 [2024-07-24 17:54:42.691552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.122 qpair failed and we were unable to recover it. 
00:29:21.122 [2024-07-24 17:54:42.701435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.122 [2024-07-24 17:54:42.701567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.122 [2024-07-24 17:54:42.701585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.122 [2024-07-24 17:54:42.701592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.122 [2024-07-24 17:54:42.701598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.123 [2024-07-24 17:54:42.701614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.123 qpair failed and we were unable to recover it. 00:29:21.123 [2024-07-24 17:54:42.711459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.123 [2024-07-24 17:54:42.711598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.123 [2024-07-24 17:54:42.711616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.123 [2024-07-24 17:54:42.711623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.123 [2024-07-24 17:54:42.711629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.123 [2024-07-24 17:54:42.711646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.123 qpair failed and we were unable to recover it. 00:29:21.383 [2024-07-24 17:54:42.721491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.383 [2024-07-24 17:54:42.721625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.383 [2024-07-24 17:54:42.721643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.383 [2024-07-24 17:54:42.721650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.383 [2024-07-24 17:54:42.721656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.383 [2024-07-24 17:54:42.721673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.383 qpair failed and we were unable to recover it. 
00:29:21.383 [2024-07-24 17:54:42.731527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.383 [2024-07-24 17:54:42.731657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.383 [2024-07-24 17:54:42.731678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.383 [2024-07-24 17:54:42.731685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.383 [2024-07-24 17:54:42.731691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.383 [2024-07-24 17:54:42.731708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.383 qpair failed and we were unable to recover it. 00:29:21.383 [2024-07-24 17:54:42.741536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.383 [2024-07-24 17:54:42.741669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.383 [2024-07-24 17:54:42.741687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.383 [2024-07-24 17:54:42.741694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.383 [2024-07-24 17:54:42.741700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.383 [2024-07-24 17:54:42.741716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.383 qpair failed and we were unable to recover it. 00:29:21.383 [2024-07-24 17:54:42.751577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.383 [2024-07-24 17:54:42.751715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.383 [2024-07-24 17:54:42.751733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.383 [2024-07-24 17:54:42.751740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.383 [2024-07-24 17:54:42.751746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.383 [2024-07-24 17:54:42.751763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.383 qpair failed and we were unable to recover it. 
00:29:21.383 [2024-07-24 17:54:42.761605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.383 [2024-07-24 17:54:42.761754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.383 [2024-07-24 17:54:42.761773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.383 [2024-07-24 17:54:42.761780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.383 [2024-07-24 17:54:42.761787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.383 [2024-07-24 17:54:42.761803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.383 qpair failed and we were unable to recover it. 00:29:21.383 [2024-07-24 17:54:42.771564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.383 [2024-07-24 17:54:42.771730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.383 [2024-07-24 17:54:42.771748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.383 [2024-07-24 17:54:42.771756] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.383 [2024-07-24 17:54:42.771762] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.383 [2024-07-24 17:54:42.771783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.383 qpair failed and we were unable to recover it. 00:29:21.383 [2024-07-24 17:54:42.781687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.383 [2024-07-24 17:54:42.781823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.383 [2024-07-24 17:54:42.781841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.383 [2024-07-24 17:54:42.781848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.383 [2024-07-24 17:54:42.781854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.383 [2024-07-24 17:54:42.781871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.383 qpair failed and we were unable to recover it. 
00:29:21.383 [2024-07-24 17:54:42.791739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.791881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.791899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.791906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.791912] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.791929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 00:29:21.384 [2024-07-24 17:54:42.801700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.801836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.801853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.801860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.801866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.801882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 00:29:21.384 [2024-07-24 17:54:42.811773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.811898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.811916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.811923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.811929] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.811945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 
00:29:21.384 [2024-07-24 17:54:42.821803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.821936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.821957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.821964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.821970] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.821986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 00:29:21.384 [2024-07-24 17:54:42.831814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.831953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.831971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.831978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.831984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.832001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 00:29:21.384 [2024-07-24 17:54:42.841848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.841984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.842001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.842008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.842014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.842031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 
00:29:21.384 [2024-07-24 17:54:42.851872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.852008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.852026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.852033] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.852039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.852065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 00:29:21.384 [2024-07-24 17:54:42.861906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.862041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.862063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.862070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.862080] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.862097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 00:29:21.384 [2024-07-24 17:54:42.871927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.872074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.872091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.872098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.872104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.872121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 
00:29:21.384 [2024-07-24 17:54:42.881964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.882106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.882124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.882131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.882137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.882154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 00:29:21.384 [2024-07-24 17:54:42.891982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.892123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.892141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.892148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.892154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.892171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 00:29:21.384 [2024-07-24 17:54:42.902065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.902225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.902242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.902249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.902255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.902272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 
00:29:21.384 [2024-07-24 17:54:42.912105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.384 [2024-07-24 17:54:42.912250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.384 [2024-07-24 17:54:42.912268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.384 [2024-07-24 17:54:42.912274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.384 [2024-07-24 17:54:42.912280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.384 [2024-07-24 17:54:42.912297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.384 qpair failed and we were unable to recover it. 00:29:21.385 [2024-07-24 17:54:42.922061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.385 [2024-07-24 17:54:42.922200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.385 [2024-07-24 17:54:42.922217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.385 [2024-07-24 17:54:42.922224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.385 [2024-07-24 17:54:42.922230] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.385 [2024-07-24 17:54:42.922247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.385 qpair failed and we were unable to recover it. 00:29:21.385 [2024-07-24 17:54:42.932122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.385 [2024-07-24 17:54:42.932259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.385 [2024-07-24 17:54:42.932276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.385 [2024-07-24 17:54:42.932283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.385 [2024-07-24 17:54:42.932289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.385 [2024-07-24 17:54:42.932307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.385 qpair failed and we were unable to recover it. 
00:29:21.385 [2024-07-24 17:54:42.942144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.385 [2024-07-24 17:54:42.942283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.385 [2024-07-24 17:54:42.942301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.385 [2024-07-24 17:54:42.942308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.385 [2024-07-24 17:54:42.942313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.385 [2024-07-24 17:54:42.942330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.385 qpair failed and we were unable to recover it. 00:29:21.385 [2024-07-24 17:54:42.952161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.385 [2024-07-24 17:54:42.952291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.385 [2024-07-24 17:54:42.952309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.385 [2024-07-24 17:54:42.952319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.385 [2024-07-24 17:54:42.952325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.385 [2024-07-24 17:54:42.952342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.385 qpair failed and we were unable to recover it. 00:29:21.385 [2024-07-24 17:54:42.962230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.385 [2024-07-24 17:54:42.962374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.385 [2024-07-24 17:54:42.962392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.385 [2024-07-24 17:54:42.962399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.385 [2024-07-24 17:54:42.962405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.385 [2024-07-24 17:54:42.962421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.385 qpair failed and we were unable to recover it. 
00:29:21.385 [2024-07-24 17:54:42.972233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.385 [2024-07-24 17:54:42.972367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.385 [2024-07-24 17:54:42.972384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.385 [2024-07-24 17:54:42.972391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.385 [2024-07-24 17:54:42.972398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.385 [2024-07-24 17:54:42.972414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.385 qpair failed and we were unable to recover it. 00:29:21.646 [2024-07-24 17:54:42.982239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.646 [2024-07-24 17:54:42.982372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.646 [2024-07-24 17:54:42.982390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.646 [2024-07-24 17:54:42.982396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.646 [2024-07-24 17:54:42.982403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.646 [2024-07-24 17:54:42.982420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.646 qpair failed and we were unable to recover it. 00:29:21.646 [2024-07-24 17:54:42.992211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.646 [2024-07-24 17:54:42.992348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.646 [2024-07-24 17:54:42.992366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.646 [2024-07-24 17:54:42.992373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.646 [2024-07-24 17:54:42.992379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.646 [2024-07-24 17:54:42.992395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.646 qpair failed and we were unable to recover it. 
00:29:21.646 [2024-07-24 17:54:43.002314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.646 [2024-07-24 17:54:43.002446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.646 [2024-07-24 17:54:43.002465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.646 [2024-07-24 17:54:43.002472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.646 [2024-07-24 17:54:43.002477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.646 [2024-07-24 17:54:43.002494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.646 qpair failed and we were unable to recover it. 00:29:21.646 [2024-07-24 17:54:43.012336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.646 [2024-07-24 17:54:43.012487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.646 [2024-07-24 17:54:43.012506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.646 [2024-07-24 17:54:43.012513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.646 [2024-07-24 17:54:43.012519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.646 [2024-07-24 17:54:43.012535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.646 qpair failed and we were unable to recover it. 00:29:21.646 [2024-07-24 17:54:43.022390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.646 [2024-07-24 17:54:43.022551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.646 [2024-07-24 17:54:43.022569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.646 [2024-07-24 17:54:43.022576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.646 [2024-07-24 17:54:43.022582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.646 [2024-07-24 17:54:43.022599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.646 qpair failed and we were unable to recover it. 
00:29:21.646 [2024-07-24 17:54:43.032410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.646 [2024-07-24 17:54:43.032563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.646 [2024-07-24 17:54:43.032581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.032588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.032594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.032611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 00:29:21.647 [2024-07-24 17:54:43.042421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.042553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.042573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.042583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.042590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.042607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 00:29:21.647 [2024-07-24 17:54:43.052450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.052580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.052599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.052605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.052612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.052629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 
00:29:21.647 [2024-07-24 17:54:43.062499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.062637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.062654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.062662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.062668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.062685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 00:29:21.647 [2024-07-24 17:54:43.072510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.072641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.072659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.072665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.072672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.072689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 00:29:21.647 [2024-07-24 17:54:43.082547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.082681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.082698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.082705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.082711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.082728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 
00:29:21.647 [2024-07-24 17:54:43.092590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.092727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.092745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.092751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.092757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.092774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 00:29:21.647 [2024-07-24 17:54:43.102582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.102717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.102735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.102742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.102749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.102766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 00:29:21.647 [2024-07-24 17:54:43.112622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.112755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.112773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.112780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.112786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.112803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 
00:29:21.647 [2024-07-24 17:54:43.122652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.122783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.122801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.122808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.122814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.122831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 00:29:21.647 [2024-07-24 17:54:43.132679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.132807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.132828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.132835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.132841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.132858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 00:29:21.647 [2024-07-24 17:54:43.142716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.142849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.142866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.142873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.142879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.142896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 
00:29:21.647 [2024-07-24 17:54:43.152711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.152878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.152896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.152903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.152909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.647 [2024-07-24 17:54:43.152925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.647 qpair failed and we were unable to recover it. 00:29:21.647 [2024-07-24 17:54:43.162767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.647 [2024-07-24 17:54:43.162896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.647 [2024-07-24 17:54:43.162913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.647 [2024-07-24 17:54:43.162920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.647 [2024-07-24 17:54:43.162927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.648 [2024-07-24 17:54:43.162943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.648 qpair failed and we were unable to recover it. 00:29:21.648 [2024-07-24 17:54:43.172800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.648 [2024-07-24 17:54:43.172933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.648 [2024-07-24 17:54:43.172951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.648 [2024-07-24 17:54:43.172957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.648 [2024-07-24 17:54:43.172963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.648 [2024-07-24 17:54:43.172983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.648 qpair failed and we were unable to recover it. 
00:29:21.648 [2024-07-24 17:54:43.182826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.648 [2024-07-24 17:54:43.182959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.648 [2024-07-24 17:54:43.182977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.648 [2024-07-24 17:54:43.182984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.648 [2024-07-24 17:54:43.182990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.648 [2024-07-24 17:54:43.183007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.648 qpair failed and we were unable to recover it. 00:29:21.648 [2024-07-24 17:54:43.192883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.648 [2024-07-24 17:54:43.193054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.648 [2024-07-24 17:54:43.193072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.648 [2024-07-24 17:54:43.193079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.648 [2024-07-24 17:54:43.193085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.648 [2024-07-24 17:54:43.193102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.648 qpair failed and we were unable to recover it. 00:29:21.648 [2024-07-24 17:54:43.202854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.648 [2024-07-24 17:54:43.202990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.648 [2024-07-24 17:54:43.203008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.648 [2024-07-24 17:54:43.203014] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.648 [2024-07-24 17:54:43.203020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.648 [2024-07-24 17:54:43.203037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.648 qpair failed and we were unable to recover it. 
00:29:21.648 [2024-07-24 17:54:43.212913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.648 [2024-07-24 17:54:43.213046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.648 [2024-07-24 17:54:43.213063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.648 [2024-07-24 17:54:43.213070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.648 [2024-07-24 17:54:43.213076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.648 [2024-07-24 17:54:43.213093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.648 qpair failed and we were unable to recover it. 00:29:21.648 [2024-07-24 17:54:43.222952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.648 [2024-07-24 17:54:43.223094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.648 [2024-07-24 17:54:43.223115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.648 [2024-07-24 17:54:43.223122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.648 [2024-07-24 17:54:43.223128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.648 [2024-07-24 17:54:43.223144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.648 qpair failed and we were unable to recover it. 00:29:21.648 [2024-07-24 17:54:43.232881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.648 [2024-07-24 17:54:43.233009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.648 [2024-07-24 17:54:43.233027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.648 [2024-07-24 17:54:43.233034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.648 [2024-07-24 17:54:43.233039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.648 [2024-07-24 17:54:43.233063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.648 qpair failed and we were unable to recover it. 
00:29:21.909 [2024-07-24 17:54:43.242954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.909 [2024-07-24 17:54:43.243130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.909 [2024-07-24 17:54:43.243148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.909 [2024-07-24 17:54:43.243155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.909 [2024-07-24 17:54:43.243162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.909 [2024-07-24 17:54:43.243178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.909 qpair failed and we were unable to recover it. 00:29:21.909 [2024-07-24 17:54:43.253036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.909 [2024-07-24 17:54:43.253175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.909 [2024-07-24 17:54:43.253193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.909 [2024-07-24 17:54:43.253200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.909 [2024-07-24 17:54:43.253206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.909 [2024-07-24 17:54:43.253223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.909 qpair failed and we were unable to recover it. 00:29:21.909 [2024-07-24 17:54:43.262991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.909 [2024-07-24 17:54:43.263135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.909 [2024-07-24 17:54:43.263153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.909 [2024-07-24 17:54:43.263160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.909 [2024-07-24 17:54:43.263165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.909 [2024-07-24 17:54:43.263186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.909 qpair failed and we were unable to recover it. 
00:29:21.909 [2024-07-24 17:54:43.273103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.909 [2024-07-24 17:54:43.273241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.909 [2024-07-24 17:54:43.273259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.909 [2024-07-24 17:54:43.273265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.909 [2024-07-24 17:54:43.273271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.909 [2024-07-24 17:54:43.273288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.909 qpair failed and we were unable to recover it. 00:29:21.909 [2024-07-24 17:54:43.283105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.909 [2024-07-24 17:54:43.283239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.283257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.283263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.283269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.283286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 00:29:21.910 [2024-07-24 17:54:43.293139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.293267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.293283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.293291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.293297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.293314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 
00:29:21.910 [2024-07-24 17:54:43.303169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.303303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.303321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.303328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.303334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.303351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 00:29:21.910 [2024-07-24 17:54:43.313184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.313322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.313342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.313349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.313355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.313371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 00:29:21.910 [2024-07-24 17:54:43.323239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.323376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.323393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.323400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.323407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.323423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 
00:29:21.910 [2024-07-24 17:54:43.333264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.333398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.333415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.333422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.333428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.333445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 00:29:21.910 [2024-07-24 17:54:43.343298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.343433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.343450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.343457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.343463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.343480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 00:29:21.910 [2024-07-24 17:54:43.353315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.353453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.353470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.353477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.353486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.353503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 
00:29:21.910 [2024-07-24 17:54:43.363352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.363488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.363506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.363512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.363518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.363535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 00:29:21.910 [2024-07-24 17:54:43.373402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.373532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.373549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.373556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.373562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.373579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 00:29:21.910 [2024-07-24 17:54:43.383405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.383539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.383556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.383563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.383569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.383586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 
00:29:21.910 [2024-07-24 17:54:43.393437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.393574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.393592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.393599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.393605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.393622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 00:29:21.910 [2024-07-24 17:54:43.403454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.403592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.403610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.403617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.910 [2024-07-24 17:54:43.403623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.910 [2024-07-24 17:54:43.403640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.910 qpair failed and we were unable to recover it. 00:29:21.910 [2024-07-24 17:54:43.413494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.910 [2024-07-24 17:54:43.413633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.910 [2024-07-24 17:54:43.413651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.910 [2024-07-24 17:54:43.413658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.911 [2024-07-24 17:54:43.413664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.911 [2024-07-24 17:54:43.413681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.911 qpair failed and we were unable to recover it. 
00:29:21.911 [2024-07-24 17:54:43.423557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.911 [2024-07-24 17:54:43.423690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.911 [2024-07-24 17:54:43.423707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.911 [2024-07-24 17:54:43.423714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.911 [2024-07-24 17:54:43.423720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.911 [2024-07-24 17:54:43.423737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.911 qpair failed and we were unable to recover it. 00:29:21.911 [2024-07-24 17:54:43.433550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.911 [2024-07-24 17:54:43.433681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.911 [2024-07-24 17:54:43.433699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.911 [2024-07-24 17:54:43.433705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.911 [2024-07-24 17:54:43.433711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.911 [2024-07-24 17:54:43.433728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.911 qpair failed and we were unable to recover it. 00:29:21.911 [2024-07-24 17:54:43.443576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.911 [2024-07-24 17:54:43.443710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.911 [2024-07-24 17:54:43.443727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.911 [2024-07-24 17:54:43.443734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.911 [2024-07-24 17:54:43.443747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.911 [2024-07-24 17:54:43.443768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.911 qpair failed and we were unable to recover it. 
00:29:21.911 [2024-07-24 17:54:43.453608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.911 [2024-07-24 17:54:43.453743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.911 [2024-07-24 17:54:43.453761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.911 [2024-07-24 17:54:43.453768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.911 [2024-07-24 17:54:43.453774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.911 [2024-07-24 17:54:43.453791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.911 qpair failed and we were unable to recover it. 00:29:21.911 [2024-07-24 17:54:43.463650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.911 [2024-07-24 17:54:43.463791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.911 [2024-07-24 17:54:43.463808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.911 [2024-07-24 17:54:43.463816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.911 [2024-07-24 17:54:43.463822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.911 [2024-07-24 17:54:43.463838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.911 qpair failed and we were unable to recover it. 00:29:21.911 [2024-07-24 17:54:43.473676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.911 [2024-07-24 17:54:43.473813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.911 [2024-07-24 17:54:43.473831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.911 [2024-07-24 17:54:43.473838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.911 [2024-07-24 17:54:43.473844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.911 [2024-07-24 17:54:43.473861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.911 qpair failed and we were unable to recover it. 
00:29:21.911 [2024-07-24 17:54:43.483695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.911 [2024-07-24 17:54:43.483827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.911 [2024-07-24 17:54:43.483845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.911 [2024-07-24 17:54:43.483852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.911 [2024-07-24 17:54:43.483858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:21.911 [2024-07-24 17:54:43.483875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.911 qpair failed and we were unable to recover it. 00:29:21.911 [2024-07-24 17:54:43.493807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.911 [2024-07-24 17:54:43.493998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.911 [2024-07-24 17:54:43.494028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.911 [2024-07-24 17:54:43.494039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.911 [2024-07-24 17:54:43.494059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:21.911 [2024-07-24 17:54:43.494084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.911 qpair failed and we were unable to recover it. 00:29:21.911 [2024-07-24 17:54:43.503760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.911 [2024-07-24 17:54:43.503900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.911 [2024-07-24 17:54:43.503919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.911 [2024-07-24 17:54:43.503927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.911 [2024-07-24 17:54:43.503933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:21.911 [2024-07-24 17:54:43.503949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.911 qpair failed and we were unable to recover it. 
00:29:22.172 [2024-07-24 17:54:43.513710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.513859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.513878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.513886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.513892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.513910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 00:29:22.173 [2024-07-24 17:54:43.523857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.524017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.524036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.524048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.524055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.524071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 00:29:22.173 [2024-07-24 17:54:43.533833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.533961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.533980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.533991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.533997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.534014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 
00:29:22.173 [2024-07-24 17:54:43.543832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.543968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.543987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.543994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.544000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.544016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 00:29:22.173 [2024-07-24 17:54:43.553867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.554191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.554210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.554216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.554223] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.554239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 00:29:22.173 [2024-07-24 17:54:43.563921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.564063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.564082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.564089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.564095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.564112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 
00:29:22.173 [2024-07-24 17:54:43.573881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.574022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.574040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.574053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.574059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.574077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 00:29:22.173 [2024-07-24 17:54:43.583959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.584101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.584121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.584128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.584134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.584151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 00:29:22.173 [2024-07-24 17:54:43.594057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.594189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.594208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.594215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.594221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.594237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 
00:29:22.173 [2024-07-24 17:54:43.604037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.604175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.604192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.604199] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.604205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.604222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 00:29:22.173 [2024-07-24 17:54:43.614055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.614184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.614203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.614210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.614217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.614233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 00:29:22.173 [2024-07-24 17:54:43.624109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.624246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.624264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.624275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.624282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.624298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 
00:29:22.173 [2024-07-24 17:54:43.634177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.634315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.634334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.634341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.634348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.173 [2024-07-24 17:54:43.634364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.173 qpair failed and we were unable to recover it. 00:29:22.173 [2024-07-24 17:54:43.644156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.173 [2024-07-24 17:54:43.644305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.173 [2024-07-24 17:54:43.644323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.173 [2024-07-24 17:54:43.644330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.173 [2024-07-24 17:54:43.644336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.644352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 00:29:22.174 [2024-07-24 17:54:43.654100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.654233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.654251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.654257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.654263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.654280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 
00:29:22.174 [2024-07-24 17:54:43.664150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.664285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.664304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.664311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.664317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.664333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 00:29:22.174 [2024-07-24 17:54:43.674217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.674355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.674374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.674381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.674387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.674403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 00:29:22.174 [2024-07-24 17:54:43.684243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.684374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.684392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.684399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.684405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.684421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 
00:29:22.174 [2024-07-24 17:54:43.694278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.694409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.694428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.694435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.694440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.694457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 00:29:22.174 [2024-07-24 17:54:43.704314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.704453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.704472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.704478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.704485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.704501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 00:29:22.174 [2024-07-24 17:54:43.714344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.714483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.714502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.714512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.714518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.714534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 
00:29:22.174 [2024-07-24 17:54:43.724443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.724584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.724603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.724609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.724615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.724632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 00:29:22.174 [2024-07-24 17:54:43.734427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.734562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.734581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.734588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.734594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.734609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 00:29:22.174 [2024-07-24 17:54:43.744572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.744714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.744732] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.744739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.744746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.744762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 
00:29:22.174 [2024-07-24 17:54:43.754525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.754663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.754681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.754688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.754694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.754710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 00:29:22.174 [2024-07-24 17:54:43.764581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.174 [2024-07-24 17:54:43.764721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.174 [2024-07-24 17:54:43.764740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.174 [2024-07-24 17:54:43.764747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.174 [2024-07-24 17:54:43.764753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.174 [2024-07-24 17:54:43.764770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.174 qpair failed and we were unable to recover it. 00:29:22.436 [2024-07-24 17:54:43.774609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.436 [2024-07-24 17:54:43.774789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.436 [2024-07-24 17:54:43.774807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.436 [2024-07-24 17:54:43.774815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.436 [2024-07-24 17:54:43.774821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.436 [2024-07-24 17:54:43.774836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-07-24 17:54:43.784501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.436 [2024-07-24 17:54:43.784646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.436 [2024-07-24 17:54:43.784664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.436 [2024-07-24 17:54:43.784671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.436 [2024-07-24 17:54:43.784677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.436 [2024-07-24 17:54:43.784693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-07-24 17:54:43.794512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.436 [2024-07-24 17:54:43.794647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.436 [2024-07-24 17:54:43.794666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.436 [2024-07-24 17:54:43.794673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.436 [2024-07-24 17:54:43.794679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.436 [2024-07-24 17:54:43.794697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-07-24 17:54:43.804545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.436 [2024-07-24 17:54:43.804687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.436 [2024-07-24 17:54:43.804705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.436 [2024-07-24 17:54:43.804716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.436 [2024-07-24 17:54:43.804722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.436 [2024-07-24 17:54:43.804739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-07-24 17:54:43.814616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.436 [2024-07-24 17:54:43.814793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.436 [2024-07-24 17:54:43.814812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.436 [2024-07-24 17:54:43.814819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.436 [2024-07-24 17:54:43.814825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.436 [2024-07-24 17:54:43.814841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-07-24 17:54:43.824704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.436 [2024-07-24 17:54:43.824847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.436 [2024-07-24 17:54:43.824866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.436 [2024-07-24 17:54:43.824873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.436 [2024-07-24 17:54:43.824879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.436 [2024-07-24 17:54:43.824896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-07-24 17:54:43.834715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.436 [2024-07-24 17:54:43.834848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.436 [2024-07-24 17:54:43.834867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.436 [2024-07-24 17:54:43.834874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.436 [2024-07-24 17:54:43.834880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.436 [2024-07-24 17:54:43.834895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-07-24 17:54:43.844713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.436 [2024-07-24 17:54:43.844854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.436 [2024-07-24 17:54:43.844874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.436 [2024-07-24 17:54:43.844881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.436 [2024-07-24 17:54:43.844887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.436 [2024-07-24 17:54:43.844903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-07-24 17:54:43.854800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.436 [2024-07-24 17:54:43.854963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.436 [2024-07-24 17:54:43.854982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.436 [2024-07-24 17:54:43.854988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.436 [2024-07-24 17:54:43.854995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.436 [2024-07-24 17:54:43.855011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-07-24 17:54:43.864721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.436 [2024-07-24 17:54:43.864854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.436 [2024-07-24 17:54:43.864873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.436 [2024-07-24 17:54:43.864879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.436 [2024-07-24 17:54:43.864886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.436 [2024-07-24 17:54:43.864902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-07-24 17:54:43.874738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.874875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.874894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.874901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.874907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.874923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-07-24 17:54:43.884848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.884985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.885004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.885011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.885017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.885033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-07-24 17:54:43.894918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.895054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.895076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.895083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.895089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.895106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-07-24 17:54:43.904900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.905036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.905060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.905068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.905074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.905090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-07-24 17:54:43.914851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.914988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.915006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.915013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.915019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.915036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-07-24 17:54:43.924890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.925031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.925058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.925065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.925070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.925087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-07-24 17:54:43.934923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.935071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.935090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.935097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.935102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.935119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-07-24 17:54:43.944948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.945131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.945150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.945156] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.945162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.945178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-07-24 17:54:43.955040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.955177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.955194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.955201] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.955207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.955223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-07-24 17:54:43.965091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.965323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.965340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.965347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.965353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.965369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-07-24 17:54:43.975099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.975236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.975254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.975261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.975267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.975283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-07-24 17:54:43.985064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.985201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.985223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.985229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.985235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.985252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-07-24 17:54:43.995123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:43.995266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.437 [2024-07-24 17:54:43.995285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.437 [2024-07-24 17:54:43.995291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.437 [2024-07-24 17:54:43.995298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.437 [2024-07-24 17:54:43.995314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-07-24 17:54:44.005165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.437 [2024-07-24 17:54:44.005300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.438 [2024-07-24 17:54:44.005319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.438 [2024-07-24 17:54:44.005326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.438 [2024-07-24 17:54:44.005332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.438 [2024-07-24 17:54:44.005348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-07-24 17:54:44.015180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.438 [2024-07-24 17:54:44.015314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.438 [2024-07-24 17:54:44.015332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.438 [2024-07-24 17:54:44.015339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.438 [2024-07-24 17:54:44.015345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.438 [2024-07-24 17:54:44.015361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.438 qpair failed and we were unable to recover it. 
00:29:22.438 [2024-07-24 17:54:44.025182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.438 [2024-07-24 17:54:44.025324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.438 [2024-07-24 17:54:44.025343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.438 [2024-07-24 17:54:44.025350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.438 [2024-07-24 17:54:44.025356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.438 [2024-07-24 17:54:44.025375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.699 [2024-07-24 17:54:44.035215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.035356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.035375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.035382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.035388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.035404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 00:29:22.699 [2024-07-24 17:54:44.045231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.045536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.045554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.045561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.045567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.045583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 
00:29:22.699 [2024-07-24 17:54:44.055373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.055505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.055522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.055529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.055535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.055551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 00:29:22.699 [2024-07-24 17:54:44.065376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.065695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.065713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.065719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.065725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.065741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 00:29:22.699 [2024-07-24 17:54:44.075319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.075457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.075481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.075488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.075494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.075510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 
00:29:22.699 [2024-07-24 17:54:44.085412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.085547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.085565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.085572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.085579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.085596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 00:29:22.699 [2024-07-24 17:54:44.095457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.095594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.095613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.095620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.095626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.095642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 00:29:22.699 [2024-07-24 17:54:44.105412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.105545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.105563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.105570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.105576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.105593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 
00:29:22.699 [2024-07-24 17:54:44.115509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.115658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.115677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.115684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.115691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.115711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 00:29:22.699 [2024-07-24 17:54:44.125509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.125646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.125665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.125671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.125677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.125693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 00:29:22.699 [2024-07-24 17:54:44.135496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.135633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.135651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.135659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.135665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.135681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 
00:29:22.699 [2024-07-24 17:54:44.145586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.699 [2024-07-24 17:54:44.145724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.699 [2024-07-24 17:54:44.145742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.699 [2024-07-24 17:54:44.145749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.699 [2024-07-24 17:54:44.145755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.699 [2024-07-24 17:54:44.145771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.699 qpair failed and we were unable to recover it. 00:29:22.699 [2024-07-24 17:54:44.155537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.155673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.155690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.155698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.155704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.155719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 00:29:22.700 [2024-07-24 17:54:44.165650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.165784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.165807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.165814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.165819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.165835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 
00:29:22.700 [2024-07-24 17:54:44.175619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.175754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.175772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.175779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.175785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.175802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 00:29:22.700 [2024-07-24 17:54:44.185701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.185836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.185854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.185861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.185866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.185883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 00:29:22.700 [2024-07-24 17:54:44.195685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.195822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.195841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.195848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.195854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.195870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 
00:29:22.700 [2024-07-24 17:54:44.205768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.205904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.205923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.205930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.205936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.205956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 00:29:22.700 [2024-07-24 17:54:44.215717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.215864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.215883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.215890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.215896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.215913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 00:29:22.700 [2024-07-24 17:54:44.225818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.225955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.225974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.225981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.225987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.226003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 
00:29:22.700 [2024-07-24 17:54:44.235785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.235918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.235937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.235945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.235951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.235967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 00:29:22.700 [2024-07-24 17:54:44.245855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.245985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.246004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.246012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.246017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.246034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 00:29:22.700 [2024-07-24 17:54:44.255897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.256036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.256065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.256073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.256079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.256095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 
00:29:22.700 [2024-07-24 17:54:44.265956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.266100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.266120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.266127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.266133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.266149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 00:29:22.700 [2024-07-24 17:54:44.275966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.276104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.700 [2024-07-24 17:54:44.276122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.700 [2024-07-24 17:54:44.276129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.700 [2024-07-24 17:54:44.276135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.700 [2024-07-24 17:54:44.276151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.700 qpair failed and we were unable to recover it. 00:29:22.700 [2024-07-24 17:54:44.285998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.700 [2024-07-24 17:54:44.286140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.701 [2024-07-24 17:54:44.286159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.701 [2024-07-24 17:54:44.286166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.701 [2024-07-24 17:54:44.286172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.701 [2024-07-24 17:54:44.286188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.701 qpair failed and we were unable to recover it. 
00:29:22.961 [2024-07-24 17:54:44.296035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.961 [2024-07-24 17:54:44.296179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.961 [2024-07-24 17:54:44.296197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.961 [2024-07-24 17:54:44.296204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.296215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.296231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 00:29:22.962 [2024-07-24 17:54:44.306069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.306205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.306224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.306230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.306236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.306253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 00:29:22.962 [2024-07-24 17:54:44.316104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.316240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.316259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.316265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.316271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.316288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 
00:29:22.962 [2024-07-24 17:54:44.326129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.326260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.326279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.326286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.326292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.326307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 00:29:22.962 [2024-07-24 17:54:44.336140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.336272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.336291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.336298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.336304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.336320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 00:29:22.962 [2024-07-24 17:54:44.346185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.346321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.346343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.346349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.346356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.346371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 
00:29:22.962 [2024-07-24 17:54:44.356209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.356348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.356366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.356373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.356379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.356395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 00:29:22.962 [2024-07-24 17:54:44.366248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.366383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.366401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.366408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.366414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.366430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 00:29:22.962 [2024-07-24 17:54:44.376281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.376416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.376434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.376441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.376447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.376463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 
00:29:22.962 [2024-07-24 17:54:44.386251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.386398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.386416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.386423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.386432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.386448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 00:29:22.962 [2024-07-24 17:54:44.396339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.396471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.396490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.396497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.396503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.396519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 00:29:22.962 [2024-07-24 17:54:44.406352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.406484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.406502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.406509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.406515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.406531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 
00:29:22.962 [2024-07-24 17:54:44.416342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.416481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.416501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.416508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.416514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.962 [2024-07-24 17:54:44.416530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.962 qpair failed and we were unable to recover it. 00:29:22.962 [2024-07-24 17:54:44.426352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.962 [2024-07-24 17:54:44.426488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.962 [2024-07-24 17:54:44.426507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.962 [2024-07-24 17:54:44.426514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.962 [2024-07-24 17:54:44.426519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.426536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 00:29:22.963 [2024-07-24 17:54:44.436455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.436591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.436610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.436617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.436622] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.436638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 
00:29:22.963 [2024-07-24 17:54:44.446487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.446622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.446641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.446647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.446653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.446669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 00:29:22.963 [2024-07-24 17:54:44.456521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.456652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.456670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.456676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.456682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.456698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 00:29:22.963 [2024-07-24 17:54:44.466557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.466692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.466710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.466717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.466723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.466739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 
00:29:22.963 [2024-07-24 17:54:44.476547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.476685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.476703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.476710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.476720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.476735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 00:29:22.963 [2024-07-24 17:54:44.486799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.486940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.486958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.486965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.486971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.486987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 00:29:22.963 [2024-07-24 17:54:44.496637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.496763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.496782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.496789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.496795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.496811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 
00:29:22.963 [2024-07-24 17:54:44.506672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.506809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.506828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.506834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.506840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.506856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 00:29:22.963 [2024-07-24 17:54:44.516721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.516886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.516905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.516912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.516917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.516934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 00:29:22.963 [2024-07-24 17:54:44.526719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.526855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.526874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.526881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.526887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.526903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 
00:29:22.963 [2024-07-24 17:54:44.536747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.536875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.536893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.536900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.536906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.536922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 00:29:22.963 [2024-07-24 17:54:44.546780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.546913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.546931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.546938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.546944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.546960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 00:29:22.963 [2024-07-24 17:54:44.556800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.963 [2024-07-24 17:54:44.556936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.963 [2024-07-24 17:54:44.556954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.963 [2024-07-24 17:54:44.556961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.963 [2024-07-24 17:54:44.556967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:22.963 [2024-07-24 17:54:44.556983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.963 qpair failed and we were unable to recover it. 
00:29:23.225 [2024-07-24 17:54:44.566828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.566961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.566980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.566987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.566996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.567012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-24 17:54:44.576860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.576996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.577014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.577020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.577026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.577047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-24 17:54:44.586881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.587021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.587039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.587051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.587057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.587074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 
00:29:23.225 [2024-07-24 17:54:44.596906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.597047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.597065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.597072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.597078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.597095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-24 17:54:44.606951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.607093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.607110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.607117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.607123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.607139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-24 17:54:44.616977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.617124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.617143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.617150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.617156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.617172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 
00:29:23.225 [2024-07-24 17:54:44.627000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.627142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.627161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.627168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.627174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.627190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-24 17:54:44.637041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.637187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.637205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.637212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.637218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.637234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-24 17:54:44.647184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.647316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.647334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.647341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.647347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.647363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 
00:29:23.225 [2024-07-24 17:54:44.657094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.657228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.657246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.657256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.657263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.657279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-24 17:54:44.667134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.667271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.667289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.667296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.667303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.667320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-24 17:54:44.677153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.677284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.677302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.677309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.225 [2024-07-24 17:54:44.677316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.225 [2024-07-24 17:54:44.677332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.225 qpair failed and we were unable to recover it. 
00:29:23.225 [2024-07-24 17:54:44.687151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.225 [2024-07-24 17:54:44.687283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.225 [2024-07-24 17:54:44.687302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.225 [2024-07-24 17:54:44.687309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.687315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.687331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-24 17:54:44.697133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.697268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.697287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.697294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.697300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.697316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-24 17:54:44.707244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.707381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.707400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.707407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.707414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.707430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 
00:29:23.226 [2024-07-24 17:54:44.717187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.717324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.717343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.717350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.717356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.717373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-24 17:54:44.727208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.727346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.727364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.727371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.727378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.727394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-24 17:54:44.737291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.737424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.737443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.737450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.737456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.737472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 
00:29:23.226 [2024-07-24 17:54:44.747341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.747477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.747495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.747506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.747512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.747530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-24 17:54:44.757349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.757488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.757507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.757513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.757519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.757535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-24 17:54:44.767434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.767567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.767585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.767592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.767597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.767614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 
00:29:23.226 [2024-07-24 17:54:44.777355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.777485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.777503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.777510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.777516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.777532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-24 17:54:44.787641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.787772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.787790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.787797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.787803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.787818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-24 17:54:44.797414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.797580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.797598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.797605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.797611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.797627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 
00:29:23.226 [2024-07-24 17:54:44.807521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.807655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.807673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.807679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.807685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.807701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-24 17:54:44.817565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.226 [2024-07-24 17:54:44.817699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.226 [2024-07-24 17:54:44.817718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.226 [2024-07-24 17:54:44.817725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.226 [2024-07-24 17:54:44.817730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.226 [2024-07-24 17:54:44.817747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.488 [2024-07-24 17:54:44.827509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.488 [2024-07-24 17:54:44.827643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.488 [2024-07-24 17:54:44.827662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.488 [2024-07-24 17:54:44.827670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.488 [2024-07-24 17:54:44.827676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.488 [2024-07-24 17:54:44.827693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.488 qpair failed and we were unable to recover it. 
00:29:23.488 [2024-07-24 17:54:44.837589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.488 [2024-07-24 17:54:44.837723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.488 [2024-07-24 17:54:44.837742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.488 [2024-07-24 17:54:44.837753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.488 [2024-07-24 17:54:44.837760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.488 [2024-07-24 17:54:44.837775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.488 qpair failed and we were unable to recover it. 00:29:23.488 [2024-07-24 17:54:44.847625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.488 [2024-07-24 17:54:44.847762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.488 [2024-07-24 17:54:44.847780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.488 [2024-07-24 17:54:44.847786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.488 [2024-07-24 17:54:44.847792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.488 [2024-07-24 17:54:44.847809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.488 qpair failed and we were unable to recover it. 00:29:23.488 [2024-07-24 17:54:44.857638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.488 [2024-07-24 17:54:44.857775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.488 [2024-07-24 17:54:44.857793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.488 [2024-07-24 17:54:44.857800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.488 [2024-07-24 17:54:44.857805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.488 [2024-07-24 17:54:44.857822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.488 qpair failed and we were unable to recover it. 
00:29:23.488 [2024-07-24 17:54:44.867741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.488 [2024-07-24 17:54:44.867899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.488 [2024-07-24 17:54:44.867917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.488 [2024-07-24 17:54:44.867924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.488 [2024-07-24 17:54:44.867930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.488 [2024-07-24 17:54:44.867946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.488 qpair failed and we were unable to recover it. 00:29:23.488 [2024-07-24 17:54:44.877713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.488 [2024-07-24 17:54:44.877853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.488 [2024-07-24 17:54:44.877871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.488 [2024-07-24 17:54:44.877877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.488 [2024-07-24 17:54:44.877883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.488 [2024-07-24 17:54:44.877899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.488 qpair failed and we were unable to recover it. 00:29:23.488 [2024-07-24 17:54:44.887737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.488 [2024-07-24 17:54:44.887868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.488 [2024-07-24 17:54:44.887886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.488 [2024-07-24 17:54:44.887893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.488 [2024-07-24 17:54:44.887899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.488 [2024-07-24 17:54:44.887915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.488 qpair failed and we were unable to recover it. 
00:29:23.488 [2024-07-24 17:54:44.897772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.488 [2024-07-24 17:54:44.897930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.488 [2024-07-24 17:54:44.897948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.488 [2024-07-24 17:54:44.897955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.488 [2024-07-24 17:54:44.897961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.488 [2024-07-24 17:54:44.897977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.488 qpair failed and we were unable to recover it. 00:29:23.488 [2024-07-24 17:54:44.907820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.488 [2024-07-24 17:54:44.907956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:44.907974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:44.907981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:44.907987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:44.908003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 00:29:23.489 [2024-07-24 17:54:44.917823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:44.917957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:44.917976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:44.917983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:44.917989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:44.918005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 
00:29:23.489 [2024-07-24 17:54:44.927861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:44.928025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:44.928049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:44.928060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:44.928065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:44.928081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 00:29:23.489 [2024-07-24 17:54:44.937897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:44.938029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:44.938053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:44.938060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:44.938066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:44.938083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 00:29:23.489 [2024-07-24 17:54:44.947931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:44.948113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:44.948132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:44.948139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:44.948145] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:44.948161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 
00:29:23.489 [2024-07-24 17:54:44.957954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:44.958100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:44.958118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:44.958125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:44.958131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:44.958147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 00:29:23.489 [2024-07-24 17:54:44.967995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:44.968135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:44.968153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:44.968159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:44.968165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:44.968181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 00:29:23.489 [2024-07-24 17:54:44.978022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:44.978162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:44.978181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:44.978188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:44.978194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:44.978210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 
00:29:23.489 [2024-07-24 17:54:44.987986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:44.988130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:44.988149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:44.988156] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:44.988162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:44.988178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 00:29:23.489 [2024-07-24 17:54:44.998093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:44.998225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:44.998244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:44.998250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:44.998257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:44.998273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 00:29:23.489 [2024-07-24 17:54:45.008096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:45.008230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:45.008248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:45.008255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:45.008261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:45.008277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 
00:29:23.489 [2024-07-24 17:54:45.018126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:45.018259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:45.018281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:45.018288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:45.018295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:45.018311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 00:29:23.489 [2024-07-24 17:54:45.028170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:45.028305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:45.028323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:45.028331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.489 [2024-07-24 17:54:45.028336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.489 [2024-07-24 17:54:45.028353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.489 qpair failed and we were unable to recover it. 00:29:23.489 [2024-07-24 17:54:45.038197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.489 [2024-07-24 17:54:45.038329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.489 [2024-07-24 17:54:45.038348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.489 [2024-07-24 17:54:45.038355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.490 [2024-07-24 17:54:45.038361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.490 [2024-07-24 17:54:45.038378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.490 qpair failed and we were unable to recover it. 
00:29:23.490 [2024-07-24 17:54:45.048232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.490 [2024-07-24 17:54:45.048373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.490 [2024-07-24 17:54:45.048391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.490 [2024-07-24 17:54:45.048398] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.490 [2024-07-24 17:54:45.048405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.490 [2024-07-24 17:54:45.048421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.490 qpair failed and we were unable to recover it. 00:29:23.490 [2024-07-24 17:54:45.058256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.490 [2024-07-24 17:54:45.058391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.490 [2024-07-24 17:54:45.058409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.490 [2024-07-24 17:54:45.058416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.490 [2024-07-24 17:54:45.058422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.490 [2024-07-24 17:54:45.058439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.490 qpair failed and we were unable to recover it. 00:29:23.490 [2024-07-24 17:54:45.068252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.490 [2024-07-24 17:54:45.068389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.490 [2024-07-24 17:54:45.068407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.490 [2024-07-24 17:54:45.068414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.490 [2024-07-24 17:54:45.068420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.490 [2024-07-24 17:54:45.068436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.490 qpair failed and we were unable to recover it. 
00:29:23.490 [2024-07-24 17:54:45.078341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.490 [2024-07-24 17:54:45.078478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.490 [2024-07-24 17:54:45.078496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.490 [2024-07-24 17:54:45.078503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.490 [2024-07-24 17:54:45.078509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.490 [2024-07-24 17:54:45.078525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.490 qpair failed and we were unable to recover it. 00:29:23.752 [2024-07-24 17:54:45.088344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.088480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.088498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.088505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.088511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.088527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 00:29:23.752 [2024-07-24 17:54:45.098367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.098515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.098534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.098541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.098547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.098563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 
00:29:23.752 [2024-07-24 17:54:45.108326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.108460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.108482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.108489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.108495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.108512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 00:29:23.752 [2024-07-24 17:54:45.118415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.118558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.118577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.118583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.118589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.118606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 00:29:23.752 [2024-07-24 17:54:45.128450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.128584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.128602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.128609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.128615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.128631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 
00:29:23.752 [2024-07-24 17:54:45.138490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.138620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.138638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.138645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.138651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.138668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 00:29:23.752 [2024-07-24 17:54:45.148527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.148661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.148679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.148686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.148692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.148708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 00:29:23.752 [2024-07-24 17:54:45.158534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.158665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.158684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.158691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.158697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.158712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 
00:29:23.752 [2024-07-24 17:54:45.168613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.168765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.168783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.168790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.168796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.168812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 00:29:23.752 [2024-07-24 17:54:45.178603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.178740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.178759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.178765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.178771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.178787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 00:29:23.752 [2024-07-24 17:54:45.188655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.188788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.188806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.188813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.188819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.188835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 
00:29:23.752 [2024-07-24 17:54:45.198663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.198799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.198820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.198826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.752 [2024-07-24 17:54:45.198832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.752 [2024-07-24 17:54:45.198848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.752 qpair failed and we were unable to recover it. 00:29:23.752 [2024-07-24 17:54:45.208691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.752 [2024-07-24 17:54:45.208822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.752 [2024-07-24 17:54:45.208841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.752 [2024-07-24 17:54:45.208847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.208853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.208869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 00:29:23.753 [2024-07-24 17:54:45.218731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.218871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.218890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.218897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.218903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.218919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 
00:29:23.753 [2024-07-24 17:54:45.228770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.228923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.228942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.228949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.228955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.228971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 00:29:23.753 [2024-07-24 17:54:45.238805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.238942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.238961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.238968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.238974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.238994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 00:29:23.753 [2024-07-24 17:54:45.248817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.248952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.248971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.248977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.248984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.249001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 
00:29:23.753 [2024-07-24 17:54:45.258852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.258985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.259011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.259019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.259025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.259041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 00:29:23.753 [2024-07-24 17:54:45.268894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.269188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.269208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.269215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.269221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.269238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 00:29:23.753 [2024-07-24 17:54:45.278910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.279053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.279072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.279079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.279085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.279101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 
00:29:23.753 [2024-07-24 17:54:45.288921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.289079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.289102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.289109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.289114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.289132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 00:29:23.753 [2024-07-24 17:54:45.298907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.299041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.299066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.299073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.299079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.299096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 00:29:23.753 [2024-07-24 17:54:45.309028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.309167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.309186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.309193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.309199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.309215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 
00:29:23.753 [2024-07-24 17:54:45.319076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.319220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.319238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.319245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.319251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.319268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 00:29:23.753 [2024-07-24 17:54:45.329065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.329198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.329216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.329223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.329229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.329249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 00:29:23.753 [2024-07-24 17:54:45.339112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.753 [2024-07-24 17:54:45.339254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.753 [2024-07-24 17:54:45.339272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.753 [2024-07-24 17:54:45.339279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.753 [2024-07-24 17:54:45.339285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:23.753 [2024-07-24 17:54:45.339301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.753 qpair failed and we were unable to recover it. 
00:29:24.015 [2024-07-24 17:54:45.349131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.015 [2024-07-24 17:54:45.349268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.015 [2024-07-24 17:54:45.349286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.015 [2024-07-24 17:54:45.349293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.015 [2024-07-24 17:54:45.349299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.015 [2024-07-24 17:54:45.349316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.015 qpair failed and we were unable to recover it. 00:29:24.015 [2024-07-24 17:54:45.359153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.015 [2024-07-24 17:54:45.359290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.015 [2024-07-24 17:54:45.359308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.015 [2024-07-24 17:54:45.359315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.015 [2024-07-24 17:54:45.359321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.015 [2024-07-24 17:54:45.359337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.015 qpair failed and we were unable to recover it. 00:29:24.015 [2024-07-24 17:54:45.369200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.015 [2024-07-24 17:54:45.369330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.015 [2024-07-24 17:54:45.369349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.015 [2024-07-24 17:54:45.369356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.015 [2024-07-24 17:54:45.369362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.015 [2024-07-24 17:54:45.369378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.015 qpair failed and we were unable to recover it. 
00:29:24.015 [2024-07-24 17:54:45.379181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.015 [2024-07-24 17:54:45.379310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.015 [2024-07-24 17:54:45.379332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.015 [2024-07-24 17:54:45.379339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.015 [2024-07-24 17:54:45.379345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.015 [2024-07-24 17:54:45.379361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.015 qpair failed and we were unable to recover it. 00:29:24.015 [2024-07-24 17:54:45.389188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.015 [2024-07-24 17:54:45.389323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.015 [2024-07-24 17:54:45.389342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.015 [2024-07-24 17:54:45.389348] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.015 [2024-07-24 17:54:45.389354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.015 [2024-07-24 17:54:45.389371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.015 qpair failed and we were unable to recover it. 00:29:24.015 [2024-07-24 17:54:45.399268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.015 [2024-07-24 17:54:45.399400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.015 [2024-07-24 17:54:45.399418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.015 [2024-07-24 17:54:45.399425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.015 [2024-07-24 17:54:45.399431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.015 [2024-07-24 17:54:45.399447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.015 qpair failed and we were unable to recover it. 
00:29:24.015 [2024-07-24 17:54:45.409223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.015 [2024-07-24 17:54:45.409368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.015 [2024-07-24 17:54:45.409386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.015 [2024-07-24 17:54:45.409393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.015 [2024-07-24 17:54:45.409399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.015 [2024-07-24 17:54:45.409415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.015 qpair failed and we were unable to recover it. 00:29:24.015 [2024-07-24 17:54:45.419330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.015 [2024-07-24 17:54:45.419466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.015 [2024-07-24 17:54:45.419485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.015 [2024-07-24 17:54:45.419492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.015 [2024-07-24 17:54:45.419498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.015 [2024-07-24 17:54:45.419519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.015 qpair failed and we were unable to recover it. 00:29:24.015 [2024-07-24 17:54:45.429544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.015 [2024-07-24 17:54:45.429679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.015 [2024-07-24 17:54:45.429698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.015 [2024-07-24 17:54:45.429705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.015 [2024-07-24 17:54:45.429711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.015 [2024-07-24 17:54:45.429728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.015 qpair failed and we were unable to recover it. 
00:29:24.015 [2024-07-24 17:54:45.439380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.015 [2024-07-24 17:54:45.439523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.015 [2024-07-24 17:54:45.439541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.015 [2024-07-24 17:54:45.439548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.439554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.439570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 00:29:24.016 [2024-07-24 17:54:45.449326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.449460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.449479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.449486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.449492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.449508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 00:29:24.016 [2024-07-24 17:54:45.459406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.459542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.459560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.459567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.459573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.459590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 
00:29:24.016 [2024-07-24 17:54:45.469412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.469551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.469574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.469580] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.469586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.469602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 00:29:24.016 [2024-07-24 17:54:45.479463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.479600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.479619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.479626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.479632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.479648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 00:29:24.016 [2024-07-24 17:54:45.489532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.489665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.489684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.489691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.489697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.489713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 
00:29:24.016 [2024-07-24 17:54:45.499571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.499729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.499748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.499755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.499761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.499778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 00:29:24.016 [2024-07-24 17:54:45.509595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.509728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.509746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.509753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.509763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.509779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 00:29:24.016 [2024-07-24 17:54:45.519618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.519793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.519811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.519818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.519824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.519841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 
00:29:24.016 [2024-07-24 17:54:45.529577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.529713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.529732] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.529739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.529745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.529761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 00:29:24.016 [2024-07-24 17:54:45.539685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.539822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.539840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.539847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.539853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.539870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 00:29:24.016 [2024-07-24 17:54:45.549684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.549820] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.549839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.549846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.549852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.549868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 
00:29:24.016 [2024-07-24 17:54:45.559660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.559794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.559819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.559825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.559831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.559847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.016 qpair failed and we were unable to recover it. 00:29:24.016 [2024-07-24 17:54:45.569799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.016 [2024-07-24 17:54:45.569936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.016 [2024-07-24 17:54:45.569954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.016 [2024-07-24 17:54:45.569961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.016 [2024-07-24 17:54:45.569967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.016 [2024-07-24 17:54:45.569984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.017 qpair failed and we were unable to recover it. 00:29:24.017 [2024-07-24 17:54:45.579814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.017 [2024-07-24 17:54:45.579942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.017 [2024-07-24 17:54:45.579960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.017 [2024-07-24 17:54:45.579967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.017 [2024-07-24 17:54:45.579973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.017 [2024-07-24 17:54:45.579990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.017 qpair failed and we were unable to recover it. 
00:29:24.017 [2024-07-24 17:54:45.589825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.017 [2024-07-24 17:54:45.589961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.017 [2024-07-24 17:54:45.589979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.017 [2024-07-24 17:54:45.589986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.017 [2024-07-24 17:54:45.589992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.017 [2024-07-24 17:54:45.590008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.017 qpair failed and we were unable to recover it. 00:29:24.017 [2024-07-24 17:54:45.599781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.017 [2024-07-24 17:54:45.599939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.017 [2024-07-24 17:54:45.599956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.017 [2024-07-24 17:54:45.599963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.017 [2024-07-24 17:54:45.599973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.017 [2024-07-24 17:54:45.599989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.017 qpair failed and we were unable to recover it. 00:29:24.017 [2024-07-24 17:54:45.609865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.017 [2024-07-24 17:54:45.609997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.017 [2024-07-24 17:54:45.610014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.017 [2024-07-24 17:54:45.610021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.017 [2024-07-24 17:54:45.610026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.017 [2024-07-24 17:54:45.610050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.017 qpair failed and we were unable to recover it. 
00:29:24.278 [2024-07-24 17:54:45.619901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.278 [2024-07-24 17:54:45.620040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.278 [2024-07-24 17:54:45.620063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.278 [2024-07-24 17:54:45.620070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.278 [2024-07-24 17:54:45.620077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.278 [2024-07-24 17:54:45.620094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.278 qpair failed and we were unable to recover it. 00:29:24.278 [2024-07-24 17:54:45.630060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.278 [2024-07-24 17:54:45.630197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.278 [2024-07-24 17:54:45.630215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.278 [2024-07-24 17:54:45.630222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.630228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.630245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 00:29:24.279 [2024-07-24 17:54:45.639942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.640109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.640128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.640135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.640141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.640157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 
00:29:24.279 [2024-07-24 17:54:45.649991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.650135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.650153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.650160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.650166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.650181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 00:29:24.279 [2024-07-24 17:54:45.659987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.660122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.660141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.660148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.660154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.660170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 00:29:24.279 [2024-07-24 17:54:45.670037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.670180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.670198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.670205] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.670211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.670228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 
00:29:24.279 [2024-07-24 17:54:45.680080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.680219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.680238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.680244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.680250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.680267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 00:29:24.279 [2024-07-24 17:54:45.690114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.690253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.690272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.690279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.690288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.690304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 00:29:24.279 [2024-07-24 17:54:45.700146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.700281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.700299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.700306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.700312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.700328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 
00:29:24.279 [2024-07-24 17:54:45.710199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.710333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.710351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.710357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.710363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.710379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 00:29:24.279 [2024-07-24 17:54:45.720205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.720339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.720358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.720365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.720372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.720388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 00:29:24.279 [2024-07-24 17:54:45.730177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.730315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.730333] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.730340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.730347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.730363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 
00:29:24.279 [2024-07-24 17:54:45.740253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.740391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.740409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.740416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.740422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.279 [2024-07-24 17:54:45.740438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.279 qpair failed and we were unable to recover it. 00:29:24.279 [2024-07-24 17:54:45.750346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.279 [2024-07-24 17:54:45.750478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.279 [2024-07-24 17:54:45.750497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.279 [2024-07-24 17:54:45.750504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.279 [2024-07-24 17:54:45.750510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.750526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 00:29:24.280 [2024-07-24 17:54:45.760335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.760475] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.760493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.760500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.760506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.760522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 
00:29:24.280 [2024-07-24 17:54:45.770340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.770474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.770492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.770499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.770505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.770521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 00:29:24.280 [2024-07-24 17:54:45.780315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.780453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.780471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.780477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.780487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.780503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 00:29:24.280 [2024-07-24 17:54:45.790421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.790556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.790575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.790581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.790588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.790604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 
00:29:24.280 [2024-07-24 17:54:45.800450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.800598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.800616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.800624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.800630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.800647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 00:29:24.280 [2024-07-24 17:54:45.810393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.810530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.810547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.810555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.810561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.810577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 00:29:24.280 [2024-07-24 17:54:45.820522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.820664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.820683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.820690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.820696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.820712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 
00:29:24.280 [2024-07-24 17:54:45.830546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.830684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.830702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.830709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.830715] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.830731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 00:29:24.280 [2024-07-24 17:54:45.840544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.840677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.840696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.840703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.840709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.840725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 00:29:24.280 [2024-07-24 17:54:45.850621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.850782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.850800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.850807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.850813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.850829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 
00:29:24.280 [2024-07-24 17:54:45.860634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.860786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.860804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.860811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.860817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.860833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 00:29:24.280 [2024-07-24 17:54:45.870591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.280 [2024-07-24 17:54:45.870726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.280 [2024-07-24 17:54:45.870744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.280 [2024-07-24 17:54:45.870755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.280 [2024-07-24 17:54:45.870761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.280 [2024-07-24 17:54:45.870778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.280 qpair failed and we were unable to recover it. 00:29:24.542 [2024-07-24 17:54:45.880681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.881030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.881055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.881061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.881068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.881084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 
00:29:24.542 [2024-07-24 17:54:45.890710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.890837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.890855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.890862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.890868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.890885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-07-24 17:54:45.900749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.900883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.900901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.900908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.900914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.900930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-07-24 17:54:45.910696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.910842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.910861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.910868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.910874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.910890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 
00:29:24.542 [2024-07-24 17:54:45.920790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.920929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.920949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.920955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.920962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.920978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-07-24 17:54:45.930837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.930972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.930991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.930998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.931004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.931020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-07-24 17:54:45.940859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.940998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.941016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.941023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.941029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.941053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 
00:29:24.542 [2024-07-24 17:54:45.950817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.950960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.950978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.950985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.950991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.951007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-07-24 17:54:45.960942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.961118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.961136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.961146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.961152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.961169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-07-24 17:54:45.970917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.971055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.971073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.971080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.971086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.971103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 
00:29:24.542 [2024-07-24 17:54:45.980960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.981096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.981114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.981121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.981128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.981144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-07-24 17:54:45.991006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:45.991321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:45.991339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:45.991346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.542 [2024-07-24 17:54:45.991352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.542 [2024-07-24 17:54:45.991368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-07-24 17:54:46.000979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.542 [2024-07-24 17:54:46.001112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.542 [2024-07-24 17:54:46.001131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.542 [2024-07-24 17:54:46.001138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.001144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.001161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 
00:29:24.543 [2024-07-24 17:54:46.011057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.011192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.011210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.011217] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.011223] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.011239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 00:29:24.543 [2024-07-24 17:54:46.021027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.021166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.021185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.021192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.021198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.021214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 00:29:24.543 [2024-07-24 17:54:46.031156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.031294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.031312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.031320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.031326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.031341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 
00:29:24.543 [2024-07-24 17:54:46.041080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.041211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.041230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.041236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.041242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.041258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 00:29:24.543 [2024-07-24 17:54:46.051190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.051318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.051336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.051347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.051353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.051369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 00:29:24.543 [2024-07-24 17:54:46.061209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.061345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.061363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.061370] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.061376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.061392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 
00:29:24.543 [2024-07-24 17:54:46.071283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.071462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.071480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.071487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.071494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.071510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 00:29:24.543 [2024-07-24 17:54:46.081254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.081387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.081406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.081413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.081419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.081436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 00:29:24.543 [2024-07-24 17:54:46.091300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.091431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.091450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.091457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.091463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.091479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 
00:29:24.543 [2024-07-24 17:54:46.101321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.101453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.101471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.101478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.101484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.101500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 00:29:24.543 [2024-07-24 17:54:46.111366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.111498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.111516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.111523] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.111529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.111545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 00:29:24.543 [2024-07-24 17:54:46.121390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.121524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.121543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.121550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.121556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.121573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 
00:29:24.543 [2024-07-24 17:54:46.131424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.543 [2024-07-24 17:54:46.131558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.543 [2024-07-24 17:54:46.131577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.543 [2024-07-24 17:54:46.131583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.543 [2024-07-24 17:54:46.131590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.543 [2024-07-24 17:54:46.131606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.543 qpair failed and we were unable to recover it. 00:29:24.805 [2024-07-24 17:54:46.141438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.141570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.141588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.141599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.141605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.141621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 00:29:24.805 [2024-07-24 17:54:46.151476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.151613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.151631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.151638] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.151644] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.151660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 
00:29:24.805 [2024-07-24 17:54:46.161481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.161616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.161635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.161641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.161647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.161664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 00:29:24.805 [2024-07-24 17:54:46.171544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.171689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.171707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.171714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.171721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.171736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 00:29:24.805 [2024-07-24 17:54:46.181492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.181630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.181648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.181655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.181661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.181677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 
00:29:24.805 [2024-07-24 17:54:46.191607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.191743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.191761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.191768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.191774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.191791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 00:29:24.805 [2024-07-24 17:54:46.201648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.201782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.201800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.201807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.201813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.201830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 00:29:24.805 [2024-07-24 17:54:46.211595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.211736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.211754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.211761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.211767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.211783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 
00:29:24.805 [2024-07-24 17:54:46.221700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.221833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.221852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.221859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.221865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.221881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 00:29:24.805 [2024-07-24 17:54:46.231710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.231849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.231871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.231878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.231884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.231900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 00:29:24.805 [2024-07-24 17:54:46.241737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.241872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.805 [2024-07-24 17:54:46.241891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.805 [2024-07-24 17:54:46.241897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.805 [2024-07-24 17:54:46.241903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.805 [2024-07-24 17:54:46.241920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.805 qpair failed and we were unable to recover it. 
00:29:24.805 [2024-07-24 17:54:46.251770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.805 [2024-07-24 17:54:46.251903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.251922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.251929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.251935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.251951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 00:29:24.806 [2024-07-24 17:54:46.261807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.261943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.261961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.261968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.261974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.261990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 00:29:24.806 [2024-07-24 17:54:46.271819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.271982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.272001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.272008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.272014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.272031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 
00:29:24.806 [2024-07-24 17:54:46.281832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.281971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.281990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.281996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.282002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.282018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 00:29:24.806 [2024-07-24 17:54:46.291910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.292092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.292111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.292117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.292124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.292140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 00:29:24.806 [2024-07-24 17:54:46.301918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.302059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.302078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.302085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.302091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.302107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 
00:29:24.806 [2024-07-24 17:54:46.311881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.312018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.312037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.312049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.312055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.312071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 00:29:24.806 [2024-07-24 17:54:46.321906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.322040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.322067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.322074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.322080] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.322096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 00:29:24.806 [2024-07-24 17:54:46.332021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.332154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.332172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.332179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.332185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.332201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 
00:29:24.806 [2024-07-24 17:54:46.342032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.342171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.342190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.342197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.342202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.342218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 00:29:24.806 [2024-07-24 17:54:46.352072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.352212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.352230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.352237] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.352243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.352258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 00:29:24.806 [2024-07-24 17:54:46.362107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.362245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.362263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.362270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.362276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.362294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 
00:29:24.806 [2024-07-24 17:54:46.372115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.372250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.372268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.372275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.806 [2024-07-24 17:54:46.372281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.806 [2024-07-24 17:54:46.372297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.806 qpair failed and we were unable to recover it. 00:29:24.806 [2024-07-24 17:54:46.382160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.806 [2024-07-24 17:54:46.382290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.806 [2024-07-24 17:54:46.382309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.806 [2024-07-24 17:54:46.382316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.807 [2024-07-24 17:54:46.382322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.807 [2024-07-24 17:54:46.382338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.807 qpair failed and we were unable to recover it. 00:29:24.807 [2024-07-24 17:54:46.392244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.807 [2024-07-24 17:54:46.392394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.807 [2024-07-24 17:54:46.392413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.807 [2024-07-24 17:54:46.392420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.807 [2024-07-24 17:54:46.392425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:24.807 [2024-07-24 17:54:46.392442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.807 qpair failed and we were unable to recover it. 
00:29:25.068 [2024-07-24 17:54:46.402206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.068 [2024-07-24 17:54:46.402342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.068 [2024-07-24 17:54:46.402361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.068 [2024-07-24 17:54:46.402368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.068 [2024-07-24 17:54:46.402375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.068 [2024-07-24 17:54:46.402392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-07-24 17:54:46.412254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.068 [2024-07-24 17:54:46.412415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.068 [2024-07-24 17:54:46.412436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.068 [2024-07-24 17:54:46.412443] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.068 [2024-07-24 17:54:46.412449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.068 [2024-07-24 17:54:46.412466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-07-24 17:54:46.422275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.068 [2024-07-24 17:54:46.422414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.068 [2024-07-24 17:54:46.422433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.068 [2024-07-24 17:54:46.422439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.068 [2024-07-24 17:54:46.422446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.068 [2024-07-24 17:54:46.422462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.068 qpair failed and we were unable to recover it. 
00:29:25.068 [2024-07-24 17:54:46.432339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.068 [2024-07-24 17:54:46.432477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.068 [2024-07-24 17:54:46.432495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.068 [2024-07-24 17:54:46.432502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.068 [2024-07-24 17:54:46.432508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.068 [2024-07-24 17:54:46.432524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-07-24 17:54:46.442331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.068 [2024-07-24 17:54:46.442466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.068 [2024-07-24 17:54:46.442485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.068 [2024-07-24 17:54:46.442491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.068 [2024-07-24 17:54:46.442497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.068 [2024-07-24 17:54:46.442513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-07-24 17:54:46.452349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.068 [2024-07-24 17:54:46.452480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.068 [2024-07-24 17:54:46.452498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.452505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.452511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.452531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 
00:29:25.069 [2024-07-24 17:54:46.462391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.462527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.462545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.462552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.462558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.462574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-07-24 17:54:46.472428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.472561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.472579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.472586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.472592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.472608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-07-24 17:54:46.482457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.482592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.482610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.482617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.482623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.482639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 
00:29:25.069 [2024-07-24 17:54:46.492490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.492631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.492650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.492657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.492663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.492680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-07-24 17:54:46.502573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.502703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.502724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.502731] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.502737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.502753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-07-24 17:54:46.512546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.512682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.512701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.512707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.512713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.512730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 
00:29:25.069 [2024-07-24 17:54:46.522580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.522714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.522733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.522740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.522746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.522762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-07-24 17:54:46.532600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.532729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.532747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.532754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.532760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.532777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-07-24 17:54:46.542601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.542735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.542753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.542760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.542766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.542787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 
00:29:25.069 [2024-07-24 17:54:46.552665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.552800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.552818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.552825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.552831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.552847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-07-24 17:54:46.562683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.562818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.562836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.562843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.562849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.562865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-07-24 17:54:46.572680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.572993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.573012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.573018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.573024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.573039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 
00:29:25.069 [2024-07-24 17:54:46.582773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.069 [2024-07-24 17:54:46.582942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.069 [2024-07-24 17:54:46.582960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.069 [2024-07-24 17:54:46.582967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.069 [2024-07-24 17:54:46.582973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.069 [2024-07-24 17:54:46.582989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.070 [2024-07-24 17:54:46.592777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.070 [2024-07-24 17:54:46.592910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.070 [2024-07-24 17:54:46.592932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.070 [2024-07-24 17:54:46.592939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.070 [2024-07-24 17:54:46.592945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.070 [2024-07-24 17:54:46.592961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.070 qpair failed and we were unable to recover it. 00:29:25.070 [2024-07-24 17:54:46.602795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.070 [2024-07-24 17:54:46.602932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.070 [2024-07-24 17:54:46.602950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.070 [2024-07-24 17:54:46.602957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.070 [2024-07-24 17:54:46.602962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.070 [2024-07-24 17:54:46.602978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.070 qpair failed and we were unable to recover it. 
00:29:25.070 [2024-07-24 17:54:46.612836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.070 [2024-07-24 17:54:46.612999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.070 [2024-07-24 17:54:46.613016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.070 [2024-07-24 17:54:46.613022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.070 [2024-07-24 17:54:46.613028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.070 [2024-07-24 17:54:46.613050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.070 qpair failed and we were unable to recover it. 00:29:25.070 [2024-07-24 17:54:46.622793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.070 [2024-07-24 17:54:46.623125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.070 [2024-07-24 17:54:46.623144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.070 [2024-07-24 17:54:46.623150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.070 [2024-07-24 17:54:46.623156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.070 [2024-07-24 17:54:46.623172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.070 qpair failed and we were unable to recover it. 00:29:25.070 [2024-07-24 17:54:46.632897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.070 [2024-07-24 17:54:46.633030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.070 [2024-07-24 17:54:46.633053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.070 [2024-07-24 17:54:46.633060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.070 [2024-07-24 17:54:46.633066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.070 [2024-07-24 17:54:46.633085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.070 qpair failed and we were unable to recover it. 
00:29:25.070 [2024-07-24 17:54:46.642927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.070 [2024-07-24 17:54:46.643069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.070 [2024-07-24 17:54:46.643088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.070 [2024-07-24 17:54:46.643095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.070 [2024-07-24 17:54:46.643101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.070 [2024-07-24 17:54:46.643117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.070 qpair failed and we were unable to recover it. 00:29:25.070 [2024-07-24 17:54:46.652981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.070 [2024-07-24 17:54:46.653123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.070 [2024-07-24 17:54:46.653141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.070 [2024-07-24 17:54:46.653148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.070 [2024-07-24 17:54:46.653154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.070 [2024-07-24 17:54:46.653170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.070 qpair failed and we were unable to recover it. 00:29:25.070 [2024-07-24 17:54:46.662902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.070 [2024-07-24 17:54:46.663038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.070 [2024-07-24 17:54:46.663063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.070 [2024-07-24 17:54:46.663069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.070 [2024-07-24 17:54:46.663075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.070 [2024-07-24 17:54:46.663092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.070 qpair failed and we were unable to recover it. 
00:29:25.331 [2024-07-24 17:54:46.673012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.331 [2024-07-24 17:54:46.673148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.331 [2024-07-24 17:54:46.673167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.331 [2024-07-24 17:54:46.673173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.331 [2024-07-24 17:54:46.673180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.331 [2024-07-24 17:54:46.673196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.331 qpair failed and we were unable to recover it. 00:29:25.331 [2024-07-24 17:54:46.683038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.331 [2024-07-24 17:54:46.683178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.331 [2024-07-24 17:54:46.683200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.331 [2024-07-24 17:54:46.683207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.331 [2024-07-24 17:54:46.683212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.331 [2024-07-24 17:54:46.683228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.331 qpair failed and we were unable to recover it. 00:29:25.331 [2024-07-24 17:54:46.692984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.331 [2024-07-24 17:54:46.693133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.331 [2024-07-24 17:54:46.693152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.331 [2024-07-24 17:54:46.693159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.331 [2024-07-24 17:54:46.693165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.331 [2024-07-24 17:54:46.693181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.331 qpair failed and we were unable to recover it. 
00:29:25.331 [2024-07-24 17:54:46.703024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.331 [2024-07-24 17:54:46.703163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.331 [2024-07-24 17:54:46.703183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.331 [2024-07-24 17:54:46.703190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.331 [2024-07-24 17:54:46.703195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.331 [2024-07-24 17:54:46.703212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.331 qpair failed and we were unable to recover it. 00:29:25.331 [2024-07-24 17:54:46.713156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.331 [2024-07-24 17:54:46.713305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.331 [2024-07-24 17:54:46.713323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.331 [2024-07-24 17:54:46.713330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.331 [2024-07-24 17:54:46.713336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.331 [2024-07-24 17:54:46.713353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.331 qpair failed and we were unable to recover it. 00:29:25.331 [2024-07-24 17:54:46.723177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.331 [2024-07-24 17:54:46.723312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.331 [2024-07-24 17:54:46.723331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.331 [2024-07-24 17:54:46.723338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.331 [2024-07-24 17:54:46.723347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.331 [2024-07-24 17:54:46.723365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.331 qpair failed and we were unable to recover it. 
00:29:25.331 [2024-07-24 17:54:46.733127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.733266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.733284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.733291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.733297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.332 [2024-07-24 17:54:46.733314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.332 qpair failed and we were unable to recover it. 00:29:25.332 [2024-07-24 17:54:46.743195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.743330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.743349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.743357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.743363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.332 [2024-07-24 17:54:46.743379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.332 qpair failed and we were unable to recover it. 00:29:25.332 [2024-07-24 17:54:46.753243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.753384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.753402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.753409] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.753415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.332 [2024-07-24 17:54:46.753431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.332 qpair failed and we were unable to recover it. 
00:29:25.332 [2024-07-24 17:54:46.763449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.763587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.763605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.763611] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.763618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.332 [2024-07-24 17:54:46.763634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.332 qpair failed and we were unable to recover it. 00:29:25.332 [2024-07-24 17:54:46.773234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.773371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.773390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.773396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.773402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.332 [2024-07-24 17:54:46.773419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.332 qpair failed and we were unable to recover it. 00:29:25.332 [2024-07-24 17:54:46.783312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.783453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.783471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.783478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.783484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.332 [2024-07-24 17:54:46.783500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.332 qpair failed and we were unable to recover it. 
00:29:25.332 [2024-07-24 17:54:46.793382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.793516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.793534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.793541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.793547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2042710 00:29:25.332 [2024-07-24 17:54:46.793564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.332 qpair failed and we were unable to recover it. 00:29:25.332 [2024-07-24 17:54:46.803439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.803775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.803803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.803815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.803824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6f4000b90 00:29:25.332 [2024-07-24 17:54:46.803848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.332 qpair failed and we were unable to recover it. 00:29:25.332 [2024-07-24 17:54:46.813385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.813525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.813545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.813552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.813562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6f4000b90 00:29:25.332 [2024-07-24 17:54:46.813579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.332 qpair failed and we were unable to recover it. 
00:29:25.332 [2024-07-24 17:54:46.823425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.823755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.823785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.823797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.823806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:25.332 [2024-07-24 17:54:46.823829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.332 qpair failed and we were unable to recover it. 00:29:25.332 [2024-07-24 17:54:46.833415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.833555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.833574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.833581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.833587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff704000b90 00:29:25.332 [2024-07-24 17:54:46.833605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.332 qpair failed and we were unable to recover it. 00:29:25.332 [2024-07-24 17:54:46.843459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.843601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.843624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.843633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.843639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6fc000b90 00:29:25.332 [2024-07-24 17:54:46.843659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.332 qpair failed and we were unable to recover it. 
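The "Unknown controller ID 0x1" messages come from the target side: the admin controller the host originally created has already been destroyed, so an I/O-queue CONNECT carrying CNTLID 0x1 cannot be matched to anything. If the target's RPC socket is still reachable, its view can be inspected directly; the sketch below assumes the default /var/tmp/spdk.sock socket and that these RPCs exist in the running SPDK build.

  # Ask the target what it currently knows about cnode1 (diagnostic sketch).
  ./scripts/rpc.py nvmf_get_subsystems
  ./scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1
  # An empty controller list here would be consistent with the CONNECT rejections above.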
00:29:25.332 [2024-07-24 17:54:46.853481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.332 [2024-07-24 17:54:46.853616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.332 [2024-07-24 17:54:46.853635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.332 [2024-07-24 17:54:46.853642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.332 [2024-07-24 17:54:46.853648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6fc000b90 00:29:25.332 [2024-07-24 17:54:46.853665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.332 qpair failed and we were unable to recover it. 00:29:25.332 [2024-07-24 17:54:46.853732] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:25.332 A controller has encountered a failure and is being reset. 00:29:25.332 Controller properly reset. 00:29:25.332 Initializing NVMe Controllers 00:29:25.332 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:25.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:25.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:25.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:25.333 Initialization complete. Launching workers. 
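After the keep-alive failure the host application resets the controller and re-establishes the connection, reattaching to 10.0.0.2:4420 and associating queues with lcores 0 through 3, which is what the "Attaching to NVMe over Fabrics controller" and "Associating TCP" lines report. For reference, attaching a controller to the same listener from a generic SPDK bdev application can be done over JSON-RPC roughly as below; the test drives the host through its own app, so this is only an analogous sketch, and the bdev controller name Nvme0 is made up.

  # Attach (or re-attach) an NVMe-oF/TCP controller as a bdev controller (sketch).
  ./scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1
  # Tear it down again when done.
  ./scripts/rpc.py bdev_nvme_detach_controller Nvme0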
00:29:25.333 Starting thread on core 1 00:29:25.333 Starting thread on core 2 00:29:25.333 Starting thread on core 3 00:29:25.333 Starting thread on core 0 00:29:25.333 17:54:46 -- host/target_disconnect.sh@59 -- # sync 00:29:25.333 00:29:25.333 real 0m11.308s 00:29:25.333 user 0m20.452s 00:29:25.333 sys 0m4.269s 00:29:25.333 17:54:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.333 17:54:46 -- common/autotest_common.sh@10 -- # set +x 00:29:25.333 ************************************ 00:29:25.333 END TEST nvmf_target_disconnect_tc2 00:29:25.333 ************************************ 00:29:25.593 17:54:46 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:29:25.593 17:54:46 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:25.593 17:54:46 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:25.593 17:54:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:25.593 17:54:46 -- nvmf/common.sh@116 -- # sync 00:29:25.593 17:54:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:25.593 17:54:46 -- nvmf/common.sh@119 -- # set +e 00:29:25.593 17:54:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:25.593 17:54:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:25.593 rmmod nvme_tcp 00:29:25.593 rmmod nvme_fabrics 00:29:25.593 rmmod nvme_keyring 00:29:25.593 17:54:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:25.593 17:54:47 -- nvmf/common.sh@123 -- # set -e 00:29:25.593 17:54:47 -- nvmf/common.sh@124 -- # return 0 00:29:25.593 17:54:47 -- nvmf/common.sh@477 -- # '[' -n 784611 ']' 00:29:25.593 17:54:47 -- nvmf/common.sh@478 -- # killprocess 784611 00:29:25.593 17:54:47 -- common/autotest_common.sh@926 -- # '[' -z 784611 ']' 00:29:25.593 17:54:47 -- common/autotest_common.sh@930 -- # kill -0 784611 00:29:25.593 17:54:47 -- common/autotest_common.sh@931 -- # uname 00:29:25.593 17:54:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:25.593 17:54:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 784611 00:29:25.593 17:54:47 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:29:25.593 17:54:47 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:29:25.593 17:54:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 784611' 00:29:25.593 killing process with pid 784611 00:29:25.593 17:54:47 -- common/autotest_common.sh@945 -- # kill 784611 00:29:25.593 17:54:47 -- common/autotest_common.sh@950 -- # wait 784611 00:29:25.853 17:54:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:25.853 17:54:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:25.853 17:54:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:25.853 17:54:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:25.853 17:54:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:25.853 17:54:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.853 17:54:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.853 17:54:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.763 17:54:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:27.763 00:29:27.763 real 0m18.809s 00:29:27.763 user 0m47.494s 00:29:27.763 sys 0m8.232s 00:29:27.763 17:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.763 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:29:27.763 ************************************ 00:29:27.763 END TEST nvmf_target_disconnect 00:29:27.763 
************************************ 00:29:28.022 17:54:49 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:28.022 17:54:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:28.022 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:29:28.022 17:54:49 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:28.022 00:29:28.022 real 23m9.761s 00:29:28.022 user 62m35.718s 00:29:28.022 sys 5m45.891s 00:29:28.022 17:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.022 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:29:28.022 ************************************ 00:29:28.023 END TEST nvmf_tcp 00:29:28.023 ************************************ 00:29:28.023 17:54:49 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:29:28.023 17:54:49 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:28.023 17:54:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:28.023 17:54:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:28.023 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:29:28.023 ************************************ 00:29:28.023 START TEST spdkcli_nvmf_tcp 00:29:28.023 ************************************ 00:29:28.023 17:54:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:28.023 * Looking for test storage... 00:29:28.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:28.023 17:54:49 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:28.023 17:54:49 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:28.023 17:54:49 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:28.023 17:54:49 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.023 17:54:49 -- nvmf/common.sh@7 -- # uname -s 00:29:28.023 17:54:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.023 17:54:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.023 17:54:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.023 17:54:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.023 17:54:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.023 17:54:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.023 17:54:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.023 17:54:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.023 17:54:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.023 17:54:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.023 17:54:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:28.023 17:54:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:28.023 17:54:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.023 17:54:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.023 17:54:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.023 17:54:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.023 17:54:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:28.023 17:54:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.023 17:54:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.023 17:54:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.023 17:54:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.023 17:54:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.023 17:54:49 -- paths/export.sh@5 -- # export PATH 00:29:28.023 17:54:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.023 17:54:49 -- nvmf/common.sh@46 -- # : 0 00:29:28.023 17:54:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:28.023 17:54:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:28.023 17:54:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:28.023 17:54:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.023 17:54:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.023 17:54:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:28.023 17:54:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:28.023 17:54:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:28.023 17:54:49 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:28.023 17:54:49 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:28.023 17:54:49 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:28.023 17:54:49 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:28.023 17:54:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:28.023 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:29:28.023 17:54:49 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:28.023 17:54:49 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=786160 00:29:28.023 17:54:49 -- spdkcli/common.sh@34 -- # waitforlisten 786160 00:29:28.023 17:54:49 -- common/autotest_common.sh@819 -- # '[' -z 786160 ']' 00:29:28.023 17:54:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.023 17:54:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:28.023 
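Here run_nvmf_tgt starts the target binary with core mask 0x3 and main core 0, and waitforlisten then polls until the /var/tmp/spdk.sock RPC socket answers. A standalone approximation of that startup sequence is sketched below; the binary path, core options and socket path are taken from the log, while the polling loop is a simplification of what waitforlisten actually does.

  # Start the NVMe-oF target on cores 0-1 and wait until its RPC socket answers.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
  tgt_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"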
17:54:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.023 17:54:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:28.023 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:29:28.023 17:54:49 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:28.023 [2024-07-24 17:54:49.606462] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:28.023 [2024-07-24 17:54:49.606574] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786160 ] 00:29:28.282 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.282 [2024-07-24 17:54:49.662191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:28.282 [2024-07-24 17:54:49.740364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:28.282 [2024-07-24 17:54:49.740515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.282 [2024-07-24 17:54:49.740517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.852 17:54:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:28.852 17:54:50 -- common/autotest_common.sh@852 -- # return 0 00:29:28.852 17:54:50 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:28.852 17:54:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:28.852 17:54:50 -- common/autotest_common.sh@10 -- # set +x 00:29:28.852 17:54:50 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:28.852 17:54:50 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:28.852 17:54:50 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:28.852 17:54:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:28.852 17:54:50 -- common/autotest_common.sh@10 -- # set +x 00:29:28.852 17:54:50 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:28.852 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:28.852 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:28.852 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:28.852 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:28.852 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:28.852 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:28.852 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:28.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:28.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:28.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:28.852 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:28.852 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:28.853 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:28.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:28.853 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:28.853 ' 00:29:29.423 [2024-07-24 17:54:50.762494] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:31.333 [2024-07-24 17:54:52.799600] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.712 [2024-07-24 17:54:53.975658] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:34.619 [2024-07-24 17:54:56.178649] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:36.563 [2024-07-24 17:54:58.096742] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:38.478 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:38.478 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:38.478 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:38.478 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:38.478 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:38.478 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:38.478 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:38.478 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:38.478 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:38.478 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:38.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:38.478 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:38.478 17:54:59 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:38.478 17:54:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:38.478 17:54:59 -- common/autotest_common.sh@10 -- # set +x 00:29:38.478 17:54:59 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:38.478 17:54:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:38.478 17:54:59 -- common/autotest_common.sh@10 -- # set +x 00:29:38.478 17:54:59 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:38.478 17:54:59 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:38.738 17:55:00 -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:38.738 17:55:00 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:38.738 17:55:00 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:38.738 17:55:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:38.738 17:55:00 -- common/autotest_common.sh@10 -- # set +x 00:29:38.738 17:55:00 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:38.738 17:55:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:38.738 17:55:00 -- common/autotest_common.sh@10 -- # set +x 00:29:38.738 17:55:00 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:38.738 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:38.738 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:38.738 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:38.738 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:38.738 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:38.738 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:38.738 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:38.738 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:38.738 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:38.738 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:38.738 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:38.738 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:38.738 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:38.738 ' 00:29:44.016 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:44.016 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:44.016 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:44.016 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:44.016 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:44.016 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:44.016 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:44.016 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:44.016 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:44.016 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:44.016 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:44.016 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:44.016 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:44.016 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:44.016 17:55:05 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:44.016 17:55:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:44.016 17:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:44.016 17:55:05 -- spdkcli/nvmf.sh@90 -- # killprocess 786160 00:29:44.016 17:55:05 -- common/autotest_common.sh@926 -- # '[' -z 786160 ']' 00:29:44.016 17:55:05 -- common/autotest_common.sh@930 -- # kill -0 786160 00:29:44.016 17:55:05 -- common/autotest_common.sh@931 -- # uname 00:29:44.016 17:55:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:44.016 17:55:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 786160 00:29:44.016 17:55:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:44.016 17:55:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:44.016 17:55:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 786160' 00:29:44.016 killing process with pid 786160 00:29:44.016 17:55:05 -- common/autotest_common.sh@945 -- # kill 786160 00:29:44.016 [2024-07-24 17:55:05.181537] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:44.016 17:55:05 -- common/autotest_common.sh@950 -- # wait 786160 00:29:44.016 17:55:05 -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:44.016 17:55:05 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:44.016 17:55:05 -- spdkcli/common.sh@13 -- # '[' -n 786160 ']' 00:29:44.016 17:55:05 -- spdkcli/common.sh@14 -- # killprocess 786160 00:29:44.016 17:55:05 -- common/autotest_common.sh@926 -- # '[' -z 786160 ']' 00:29:44.016 17:55:05 -- common/autotest_common.sh@930 -- # kill -0 786160 00:29:44.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (786160) - No such process 00:29:44.016 17:55:05 -- common/autotest_common.sh@953 -- # echo 'Process with pid 786160 is not found' 00:29:44.016 Process with pid 786160 is not found 00:29:44.016 17:55:05 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:44.016 17:55:05 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:44.016 17:55:05 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:44.016 00:29:44.016 real 0m15.935s 00:29:44.016 user 0m33.136s 00:29:44.016 sys 0m0.684s 00:29:44.016 17:55:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.016 17:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:44.016 ************************************ 00:29:44.016 END TEST spdkcli_nvmf_tcp 00:29:44.016 ************************************ 00:29:44.016 17:55:05 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:44.016 17:55:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:44.016 17:55:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:44.016 17:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:44.016 ************************************ 00:29:44.016 START TEST 
nvmf_identify_passthru 00:29:44.016 ************************************ 00:29:44.016 17:55:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:44.016 * Looking for test storage... 00:29:44.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:44.016 17:55:05 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.016 17:55:05 -- nvmf/common.sh@7 -- # uname -s 00:29:44.016 17:55:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.016 17:55:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.016 17:55:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.016 17:55:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.016 17:55:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.016 17:55:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.016 17:55:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.016 17:55:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.016 17:55:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.016 17:55:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.016 17:55:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:44.016 17:55:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:44.016 17:55:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.016 17:55:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.016 17:55:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.016 17:55:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.016 17:55:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.016 17:55:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.016 17:55:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.016 17:55:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.016 17:55:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.016 17:55:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.016 17:55:05 -- paths/export.sh@5 -- # export PATH 00:29:44.016 
17:55:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.016 17:55:05 -- nvmf/common.sh@46 -- # : 0 00:29:44.016 17:55:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:44.016 17:55:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:44.016 17:55:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:44.016 17:55:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.016 17:55:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.016 17:55:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:44.016 17:55:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:44.016 17:55:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:44.016 17:55:05 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.016 17:55:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.016 17:55:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.016 17:55:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.016 17:55:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.017 17:55:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.017 17:55:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.017 17:55:05 -- paths/export.sh@5 -- # export PATH 00:29:44.017 17:55:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.017 17:55:05 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:44.017 17:55:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:44.017 17:55:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.017 17:55:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:44.017 17:55:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:44.017 17:55:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:44.017 17:55:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.017 17:55:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:44.017 17:55:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.017 17:55:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:44.017 17:55:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:44.017 17:55:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:44.017 17:55:05 -- common/autotest_common.sh@10 -- # set +x 00:29:49.292 17:55:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:49.292 17:55:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:49.292 17:55:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:49.292 17:55:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:49.292 17:55:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:49.292 17:55:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:49.292 17:55:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:49.292 17:55:10 -- nvmf/common.sh@294 -- # net_devs=() 00:29:49.292 17:55:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:49.292 17:55:10 -- nvmf/common.sh@295 -- # e810=() 00:29:49.292 17:55:10 -- nvmf/common.sh@295 -- # local -ga e810 00:29:49.292 17:55:10 -- nvmf/common.sh@296 -- # x722=() 00:29:49.292 17:55:10 -- nvmf/common.sh@296 -- # local -ga x722 00:29:49.292 17:55:10 -- nvmf/common.sh@297 -- # mlx=() 00:29:49.292 17:55:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:49.292 17:55:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.292 17:55:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:49.292 17:55:10 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:49.292 17:55:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:49.292 17:55:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:49.292 17:55:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:49.292 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:49.292 17:55:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:49.292 17:55:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:49.292 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:49.292 17:55:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:49.292 17:55:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:49.292 17:55:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.292 17:55:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:49.292 17:55:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.292 17:55:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:49.292 Found net devices under 0000:86:00.0: cvl_0_0 00:29:49.292 17:55:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.292 17:55:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:49.292 17:55:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.292 17:55:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:49.292 17:55:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.292 17:55:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:49.292 Found net devices under 0000:86:00.1: cvl_0_1 00:29:49.292 17:55:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.292 17:55:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:49.292 17:55:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:49.292 17:55:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:49.292 17:55:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.292 17:55:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.292 17:55:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.292 17:55:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:49.292 17:55:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.292 17:55:10 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.292 17:55:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:49.292 17:55:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.292 17:55:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.292 17:55:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:49.292 17:55:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:49.292 17:55:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.292 17:55:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.292 17:55:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.292 17:55:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:49.292 17:55:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:49.292 17:55:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.292 17:55:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.292 17:55:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.292 17:55:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:49.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:29:49.292 00:29:49.292 --- 10.0.0.2 ping statistics --- 00:29:49.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.292 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:29:49.292 17:55:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:49.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:29:49.292 00:29:49.292 --- 10.0.0.1 ping statistics --- 00:29:49.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.292 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:29:49.292 17:55:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.292 17:55:10 -- nvmf/common.sh@410 -- # return 0 00:29:49.292 17:55:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:49.292 17:55:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.292 17:55:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:49.292 17:55:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.292 17:55:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:49.292 17:55:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:49.292 17:55:10 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:49.292 17:55:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:49.292 17:55:10 -- common/autotest_common.sh@10 -- # set +x 00:29:49.292 17:55:10 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:49.292 17:55:10 -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:49.292 17:55:10 -- common/autotest_common.sh@1509 -- # local bdfs 00:29:49.292 17:55:10 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:29:49.292 17:55:10 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:29:49.292 17:55:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:49.292 17:55:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:49.292 17:55:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:29:49.292 17:55:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:49.292 17:55:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:49.292 17:55:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:49.292 17:55:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:29:49.292 17:55:10 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:29:49.292 17:55:10 -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:29:49.292 17:55:10 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:29:49.293 17:55:10 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:49.293 17:55:10 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:49.293 17:55:10 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:49.293 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.483 17:55:14 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:29:53.483 17:55:14 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:53.483 17:55:14 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:53.483 17:55:14 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:53.483 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.670 17:55:18 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:57.670 17:55:18 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:57.670 17:55:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:57.670 17:55:18 -- common/autotest_common.sh@10 -- # set +x 00:29:57.670 17:55:18 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:57.670 17:55:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:57.670 17:55:18 -- common/autotest_common.sh@10 -- # set +x 00:29:57.670 17:55:18 -- target/identify_passthru.sh@31 -- # nvmfpid=793259 00:29:57.670 17:55:18 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:57.670 17:55:18 -- target/identify_passthru.sh@35 -- # waitforlisten 793259 00:29:57.670 17:55:18 -- common/autotest_common.sh@819 -- # '[' -z 793259 ']' 00:29:57.670 17:55:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.670 17:55:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:57.670 17:55:18 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:57.670 17:55:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.670 17:55:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:57.670 17:55:18 -- common/autotest_common.sh@10 -- # set +x 00:29:57.670 [2024-07-24 17:55:18.995531] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
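The trace above resolves the local boot NVMe before the passthru target comes up: gen_nvme.sh emits a bdev_nvme_attach_controller config, jq pulls the PCIe traddr (0000:5e:00.0 here), and spdk_nvme_identify reads the Serial Number and Model Number that the test later compares against what the NVMe-oF passthru controller reports. A minimal standalone sketch of that lookup, assuming the same workspace layout as in the trace (the variable names below are illustrative, not part of the test scripts):

    #!/usr/bin/env bash
    # Sketch only: reproduces the bdf/serial/model lookup seen in the trace above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # gen_nvme.sh prints a JSON bdev config for every local NVMe controller;
    # take the first PCIe address it reports.
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
    # Read identify data straight from the PCIe controller (same flags as the trace).
    serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
             grep 'Serial Number:' | awk '{print $3}')
    model=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
            grep 'Model Number:' | awk '{print $3}')
    echo "first NVMe: $bdf serial=$serial model=$model"

With the hardware in this run the sketch would print serial BTLJ72430F0E1P0FGN and model INTEL, which is what the identify-over-TCP step later matches against.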
00:29:57.670 [2024-07-24 17:55:18.995575] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.670 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.670 [2024-07-24 17:55:19.052013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.670 [2024-07-24 17:55:19.129722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:57.670 [2024-07-24 17:55:19.129829] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.670 [2024-07-24 17:55:19.129836] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.670 [2024-07-24 17:55:19.129843] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:57.670 [2024-07-24 17:55:19.129881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.670 [2024-07-24 17:55:19.129983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.670 [2024-07-24 17:55:19.129998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.670 [2024-07-24 17:55:19.130000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.239 17:55:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:58.239 17:55:19 -- common/autotest_common.sh@852 -- # return 0 00:29:58.239 17:55:19 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:58.239 17:55:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.239 17:55:19 -- common/autotest_common.sh@10 -- # set +x 00:29:58.239 INFO: Log level set to 20 00:29:58.239 INFO: Requests: 00:29:58.239 { 00:29:58.239 "jsonrpc": "2.0", 00:29:58.239 "method": "nvmf_set_config", 00:29:58.239 "id": 1, 00:29:58.239 "params": { 00:29:58.239 "admin_cmd_passthru": { 00:29:58.239 "identify_ctrlr": true 00:29:58.239 } 00:29:58.239 } 00:29:58.239 } 00:29:58.239 00:29:58.239 INFO: response: 00:29:58.239 { 00:29:58.239 "jsonrpc": "2.0", 00:29:58.239 "id": 1, 00:29:58.239 "result": true 00:29:58.239 } 00:29:58.239 00:29:58.239 17:55:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.239 17:55:19 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:58.239 17:55:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.239 17:55:19 -- common/autotest_common.sh@10 -- # set +x 00:29:58.239 INFO: Setting log level to 20 00:29:58.239 INFO: Setting log level to 20 00:29:58.239 INFO: Log level set to 20 00:29:58.239 INFO: Log level set to 20 00:29:58.239 INFO: Requests: 00:29:58.239 { 00:29:58.239 "jsonrpc": "2.0", 00:29:58.239 "method": "framework_start_init", 00:29:58.239 "id": 1 00:29:58.239 } 00:29:58.239 00:29:58.239 INFO: Requests: 00:29:58.239 { 00:29:58.239 "jsonrpc": "2.0", 00:29:58.239 "method": "framework_start_init", 00:29:58.239 "id": 1 00:29:58.239 } 00:29:58.239 00:29:58.498 [2024-07-24 17:55:19.885920] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:58.498 INFO: response: 00:29:58.498 { 00:29:58.498 "jsonrpc": "2.0", 00:29:58.498 "id": 1, 00:29:58.499 "result": true 00:29:58.499 } 00:29:58.499 00:29:58.499 INFO: response: 00:29:58.499 { 00:29:58.499 "jsonrpc": "2.0", 00:29:58.499 "id": 1, 00:29:58.499 "result": true 00:29:58.499 } 00:29:58.499 00:29:58.499 17:55:19 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.499 17:55:19 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.499 17:55:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.499 17:55:19 -- common/autotest_common.sh@10 -- # set +x 00:29:58.499 INFO: Setting log level to 40 00:29:58.499 INFO: Setting log level to 40 00:29:58.499 INFO: Setting log level to 40 00:29:58.499 [2024-07-24 17:55:19.899187] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.499 17:55:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.499 17:55:19 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:58.499 17:55:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:58.499 17:55:19 -- common/autotest_common.sh@10 -- # set +x 00:29:58.499 17:55:19 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:29:58.499 17:55:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.499 17:55:19 -- common/autotest_common.sh@10 -- # set +x 00:30:01.788 Nvme0n1 00:30:01.788 17:55:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.788 17:55:22 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:01.788 17:55:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.788 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:30:01.788 17:55:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.788 17:55:22 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:01.788 17:55:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.788 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:30:01.788 17:55:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.788 17:55:22 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.788 17:55:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.788 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:30:01.788 [2024-07-24 17:55:22.791040] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.788 17:55:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.788 17:55:22 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:01.788 17:55:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.788 17:55:22 -- common/autotest_common.sh@10 -- # set +x 00:30:01.788 [2024-07-24 17:55:22.798820] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:01.788 [ 00:30:01.788 { 00:30:01.788 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:01.788 "subtype": "Discovery", 00:30:01.788 "listen_addresses": [], 00:30:01.788 "allow_any_host": true, 00:30:01.788 "hosts": [] 00:30:01.788 }, 00:30:01.788 { 00:30:01.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.788 "subtype": "NVMe", 00:30:01.788 "listen_addresses": [ 00:30:01.788 { 00:30:01.788 "transport": "TCP", 00:30:01.788 "trtype": "TCP", 00:30:01.788 "adrfam": "IPv4", 00:30:01.788 "traddr": "10.0.0.2", 00:30:01.788 "trsvcid": "4420" 00:30:01.788 } 00:30:01.788 ], 00:30:01.788 "allow_any_host": true, 00:30:01.788 "hosts": [], 00:30:01.788 "serial_number": "SPDK00000000000001", 
00:30:01.788 "model_number": "SPDK bdev Controller", 00:30:01.788 "max_namespaces": 1, 00:30:01.788 "min_cntlid": 1, 00:30:01.788 "max_cntlid": 65519, 00:30:01.788 "namespaces": [ 00:30:01.788 { 00:30:01.788 "nsid": 1, 00:30:01.788 "bdev_name": "Nvme0n1", 00:30:01.788 "name": "Nvme0n1", 00:30:01.788 "nguid": "42A7DBEC85514029A816B2071C44E591", 00:30:01.788 "uuid": "42a7dbec-8551-4029-a816-b2071c44e591" 00:30:01.788 } 00:30:01.788 ] 00:30:01.788 } 00:30:01.788 ] 00:30:01.788 17:55:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.788 17:55:22 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:01.788 17:55:22 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:01.788 17:55:22 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:01.788 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.788 17:55:22 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:30:01.788 17:55:22 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:01.788 17:55:22 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:01.788 17:55:22 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:01.788 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.788 17:55:23 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:01.788 17:55:23 -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:30:01.788 17:55:23 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:01.788 17:55:23 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:01.788 17:55:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.788 17:55:23 -- common/autotest_common.sh@10 -- # set +x 00:30:01.788 17:55:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.788 17:55:23 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:01.788 17:55:23 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:01.788 17:55:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:01.788 17:55:23 -- nvmf/common.sh@116 -- # sync 00:30:01.788 17:55:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:01.788 17:55:23 -- nvmf/common.sh@119 -- # set +e 00:30:01.788 17:55:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:01.788 17:55:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:01.788 rmmod nvme_tcp 00:30:01.788 rmmod nvme_fabrics 00:30:01.788 rmmod nvme_keyring 00:30:01.788 17:55:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:01.788 17:55:23 -- nvmf/common.sh@123 -- # set -e 00:30:01.789 17:55:23 -- nvmf/common.sh@124 -- # return 0 00:30:01.789 17:55:23 -- nvmf/common.sh@477 -- # '[' -n 793259 ']' 00:30:01.789 17:55:23 -- nvmf/common.sh@478 -- # killprocess 793259 00:30:01.789 17:55:23 -- common/autotest_common.sh@926 -- # '[' -z 793259 ']' 00:30:01.789 17:55:23 -- common/autotest_common.sh@930 -- # kill -0 793259 00:30:01.789 17:55:23 -- common/autotest_common.sh@931 -- # uname 00:30:01.789 17:55:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:01.789 17:55:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 793259 00:30:01.789 17:55:23 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:01.789 17:55:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:01.789 17:55:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 793259' 00:30:01.789 killing process with pid 793259 00:30:01.789 17:55:23 -- common/autotest_common.sh@945 -- # kill 793259 00:30:01.789 [2024-07-24 17:55:23.166287] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:01.789 17:55:23 -- common/autotest_common.sh@950 -- # wait 793259 00:30:03.168 17:55:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:03.168 17:55:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:03.168 17:55:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:03.168 17:55:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:03.168 17:55:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:03.168 17:55:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.169 17:55:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:03.169 17:55:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.709 17:55:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:05.709 00:30:05.709 real 0m21.303s 00:30:05.709 user 0m29.290s 00:30:05.709 sys 0m4.545s 00:30:05.709 17:55:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.709 17:55:26 -- common/autotest_common.sh@10 -- # set +x 00:30:05.709 ************************************ 00:30:05.709 END TEST nvmf_identify_passthru 00:30:05.709 ************************************ 00:30:05.709 17:55:26 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:05.709 17:55:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:05.709 17:55:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:05.709 17:55:26 -- common/autotest_common.sh@10 -- # set +x 00:30:05.709 ************************************ 00:30:05.709 START TEST nvmf_dif 00:30:05.709 ************************************ 00:30:05.709 17:55:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:05.709 * Looking for test storage... 
00:30:05.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:05.709 17:55:26 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.709 17:55:26 -- nvmf/common.sh@7 -- # uname -s 00:30:05.709 17:55:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.709 17:55:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.709 17:55:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.709 17:55:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.709 17:55:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.709 17:55:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.709 17:55:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.709 17:55:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.709 17:55:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.709 17:55:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.709 17:55:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:05.709 17:55:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:05.709 17:55:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.709 17:55:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.709 17:55:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.710 17:55:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.710 17:55:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.710 17:55:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.710 17:55:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.710 17:55:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.710 17:55:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.710 17:55:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.710 17:55:26 -- paths/export.sh@5 -- # export PATH 00:30:05.710 17:55:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.710 17:55:26 -- nvmf/common.sh@46 -- # : 0 00:30:05.710 17:55:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:05.710 17:55:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:05.710 17:55:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:05.710 17:55:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.710 17:55:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.710 17:55:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:05.710 17:55:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:05.710 17:55:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:05.710 17:55:26 -- target/dif.sh@15 -- # NULL_META=16 00:30:05.710 17:55:26 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:05.710 17:55:26 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:05.710 17:55:26 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:05.710 17:55:26 -- target/dif.sh@135 -- # nvmftestinit 00:30:05.710 17:55:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:05.710 17:55:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.710 17:55:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:05.710 17:55:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:05.710 17:55:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:05.710 17:55:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.710 17:55:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:05.710 17:55:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.710 17:55:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:05.710 17:55:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:05.710 17:55:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:05.710 17:55:26 -- common/autotest_common.sh@10 -- # set +x 00:30:11.024 17:55:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:11.024 17:55:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:11.024 17:55:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:11.024 17:55:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:11.024 17:55:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:11.024 17:55:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:11.024 17:55:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:11.024 17:55:31 -- nvmf/common.sh@294 -- # net_devs=() 00:30:11.024 17:55:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:11.024 17:55:31 -- nvmf/common.sh@295 -- # e810=() 00:30:11.024 17:55:31 -- nvmf/common.sh@295 -- # local -ga e810 00:30:11.024 17:55:31 -- nvmf/common.sh@296 -- # x722=() 00:30:11.024 17:55:31 -- nvmf/common.sh@296 -- # local -ga x722 00:30:11.024 17:55:31 -- nvmf/common.sh@297 -- # mlx=() 00:30:11.024 17:55:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:11.024 17:55:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.024 17:55:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.024 17:55:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.024 17:55:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:30:11.024 17:55:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.024 17:55:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.024 17:55:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.024 17:55:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.024 17:55:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.024 17:55:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.024 17:55:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.024 17:55:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:11.024 17:55:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:11.024 17:55:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:11.024 17:55:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:11.024 17:55:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:11.024 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:11.024 17:55:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:11.024 17:55:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:11.024 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:11.024 17:55:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:11.024 17:55:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:11.024 17:55:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.024 17:55:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:11.024 17:55:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.024 17:55:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:11.024 Found net devices under 0000:86:00.0: cvl_0_0 00:30:11.024 17:55:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.024 17:55:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:11.024 17:55:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.024 17:55:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:11.024 17:55:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.024 17:55:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:11.024 Found net devices under 0000:86:00.1: cvl_0_1 00:30:11.024 17:55:31 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:11.024 17:55:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:11.024 17:55:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:11.024 17:55:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:11.024 17:55:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:11.024 17:55:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.024 17:55:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.024 17:55:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.024 17:55:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:11.024 17:55:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.024 17:55:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.024 17:55:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:11.024 17:55:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.024 17:55:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.024 17:55:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:11.024 17:55:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:11.024 17:55:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.024 17:55:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.024 17:55:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.024 17:55:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.024 17:55:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:11.024 17:55:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.024 17:55:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.025 17:55:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.025 17:55:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:11.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:30:11.025 00:30:11.025 --- 10.0.0.2 ping statistics --- 00:30:11.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.025 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:30:11.025 17:55:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:11.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:30:11.025 00:30:11.025 --- 10.0.0.1 ping statistics --- 00:30:11.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.025 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:30:11.025 17:55:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.025 17:55:31 -- nvmf/common.sh@410 -- # return 0 00:30:11.025 17:55:31 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:11.025 17:55:31 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:12.927 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:12.927 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:12.927 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:12.927 17:55:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.927 17:55:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:12.927 17:55:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:12.927 17:55:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.927 17:55:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:12.927 17:55:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:12.927 17:55:34 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:12.927 17:55:34 -- target/dif.sh@137 -- # nvmfappstart 00:30:12.927 17:55:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:12.927 17:55:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:12.927 17:55:34 -- common/autotest_common.sh@10 -- # set +x 00:30:12.927 17:55:34 -- nvmf/common.sh@469 -- # nvmfpid=798572 00:30:12.927 17:55:34 -- nvmf/common.sh@470 -- # waitforlisten 798572 00:30:12.927 17:55:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:12.927 17:55:34 -- common/autotest_common.sh@819 -- # '[' -z 798572 ']' 00:30:12.927 17:55:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.927 17:55:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:12.928 17:55:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
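At this point dif.sh has launched nvmf_tgt (pid 798572) inside the cvl_0_0_ns_spdk namespace and waitforlisten is blocking until the target answers on /var/tmp/spdk.sock. The real helper (note the rpc_addr and max_retries=100 locals in the trace) is more involved; the sketch below is only a simplified stand-in that polls for the Unix socket while checking that the process is still alive, assuming the default socket path:

    # Sketch only: not the actual waitforlisten implementation from autotest_common.sh.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
            [[ -S $sock ]] && return 0               # RPC socket file exists
            sleep 0.5
        done
        return 1                                     # gave up after ~50s
    }
    # e.g.: wait_for_rpc_socket 798572 && echo "nvmf_tgt is up"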
00:30:12.928 17:55:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:12.928 17:55:34 -- common/autotest_common.sh@10 -- # set +x 00:30:12.928 [2024-07-24 17:55:34.336223] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:12.928 [2024-07-24 17:55:34.336267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.928 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.928 [2024-07-24 17:55:34.395595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.928 [2024-07-24 17:55:34.468864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:12.928 [2024-07-24 17:55:34.468997] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.928 [2024-07-24 17:55:34.469006] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.928 [2024-07-24 17:55:34.469012] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.928 [2024-07-24 17:55:34.469031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.864 17:55:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:13.864 17:55:35 -- common/autotest_common.sh@852 -- # return 0 00:30:13.864 17:55:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:13.864 17:55:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:13.864 17:55:35 -- common/autotest_common.sh@10 -- # set +x 00:30:13.864 17:55:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.864 17:55:35 -- target/dif.sh@139 -- # create_transport 00:30:13.864 17:55:35 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:13.864 17:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.864 17:55:35 -- common/autotest_common.sh@10 -- # set +x 00:30:13.864 [2024-07-24 17:55:35.163959] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.864 17:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.864 17:55:35 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:13.864 17:55:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:13.864 17:55:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:13.864 17:55:35 -- common/autotest_common.sh@10 -- # set +x 00:30:13.864 ************************************ 00:30:13.864 START TEST fio_dif_1_default 00:30:13.864 ************************************ 00:30:13.864 17:55:35 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:30:13.864 17:55:35 -- target/dif.sh@86 -- # create_subsystems 0 00:30:13.864 17:55:35 -- target/dif.sh@28 -- # local sub 00:30:13.864 17:55:35 -- target/dif.sh@30 -- # for sub in "$@" 00:30:13.864 17:55:35 -- target/dif.sh@31 -- # create_subsystem 0 00:30:13.864 17:55:35 -- target/dif.sh@18 -- # local sub_id=0 00:30:13.864 17:55:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:13.864 17:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.864 17:55:35 -- common/autotest_common.sh@10 -- # set +x 00:30:13.864 bdev_null0 00:30:13.864 17:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.864 17:55:35 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:13.864 17:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.864 17:55:35 -- common/autotest_common.sh@10 -- # set +x 00:30:13.864 17:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.864 17:55:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:13.864 17:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.864 17:55:35 -- common/autotest_common.sh@10 -- # set +x 00:30:13.864 17:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.864 17:55:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:13.864 17:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.864 17:55:35 -- common/autotest_common.sh@10 -- # set +x 00:30:13.864 [2024-07-24 17:55:35.208206] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.864 17:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.864 17:55:35 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:13.864 17:55:35 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:13.864 17:55:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:13.864 17:55:35 -- nvmf/common.sh@520 -- # config=() 00:30:13.864 17:55:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.864 17:55:35 -- nvmf/common.sh@520 -- # local subsystem config 00:30:13.864 17:55:35 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.864 17:55:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:13.864 17:55:35 -- target/dif.sh@82 -- # gen_fio_conf 00:30:13.864 17:55:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:13.864 { 00:30:13.864 "params": { 00:30:13.864 "name": "Nvme$subsystem", 00:30:13.864 "trtype": "$TEST_TRANSPORT", 00:30:13.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.864 "adrfam": "ipv4", 00:30:13.864 "trsvcid": "$NVMF_PORT", 00:30:13.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.864 "hdgst": ${hdgst:-false}, 00:30:13.864 "ddgst": ${ddgst:-false} 00:30:13.864 }, 00:30:13.864 "method": "bdev_nvme_attach_controller" 00:30:13.864 } 00:30:13.864 EOF 00:30:13.864 )") 00:30:13.864 17:55:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:13.864 17:55:35 -- target/dif.sh@54 -- # local file 00:30:13.864 17:55:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:13.864 17:55:35 -- target/dif.sh@56 -- # cat 00:30:13.864 17:55:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:13.864 17:55:35 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.864 17:55:35 -- common/autotest_common.sh@1320 -- # shift 00:30:13.864 17:55:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:13.864 17:55:35 -- nvmf/common.sh@542 -- # cat 00:30:13.864 17:55:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.864 17:55:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.864 17:55:35 -- target/dif.sh@72 -- # (( file 
= 1 )) 00:30:13.864 17:55:35 -- target/dif.sh@72 -- # (( file <= files )) 00:30:13.864 17:55:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:13.864 17:55:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:13.864 17:55:35 -- nvmf/common.sh@544 -- # jq . 00:30:13.864 17:55:35 -- nvmf/common.sh@545 -- # IFS=, 00:30:13.864 17:55:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:13.864 "params": { 00:30:13.864 "name": "Nvme0", 00:30:13.864 "trtype": "tcp", 00:30:13.864 "traddr": "10.0.0.2", 00:30:13.864 "adrfam": "ipv4", 00:30:13.864 "trsvcid": "4420", 00:30:13.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.864 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:13.864 "hdgst": false, 00:30:13.864 "ddgst": false 00:30:13.864 }, 00:30:13.864 "method": "bdev_nvme_attach_controller" 00:30:13.864 }' 00:30:13.864 17:55:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:13.864 17:55:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:13.864 17:55:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.864 17:55:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.864 17:55:35 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:13.864 17:55:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:13.864 17:55:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:13.864 17:55:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:13.864 17:55:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:13.864 17:55:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:14.124 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:14.124 fio-3.35 00:30:14.124 Starting 1 thread 00:30:14.124 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.693 [2024-07-24 17:55:36.030651] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
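For reference, the fio_dif_1_default setup traced above reduces to four RPCs: create a null bdev carrying DIF metadata, wrap it in an NVMe-oF subsystem, attach the bdev as a namespace, and expose the subsystem on a TCP listener. A minimal sketch of the same sequence issued by hand through SPDK's scripts/rpc.py (rpc_cmd in the trace effectively forwards to it; a running nvmf_tgt with the TCP transport already created is assumed, and the --dif-type value varies per test below):

# Sketch only: the RPC sequence behind create_subsystem 0, issued directly.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420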
00:30:14.693 [2024-07-24 17:55:36.030697] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:24.665 00:30:24.665 filename0: (groupid=0, jobs=1): err= 0: pid=799107: Wed Jul 24 17:55:46 2024 00:30:24.665 read: IOPS=180, BW=724KiB/s (741kB/s)(7248KiB/10016msec) 00:30:24.665 slat (nsec): min=4054, max=18788, avg=6062.89, stdev=687.28 00:30:24.665 clat (usec): min=1602, max=45123, avg=22091.79, stdev=20335.58 00:30:24.665 lat (usec): min=1607, max=45135, avg=22097.85, stdev=20335.52 00:30:24.665 clat percentiles (usec): 00:30:24.665 | 1.00th=[ 1614], 5.00th=[ 1631], 10.00th=[ 1631], 20.00th=[ 1647], 00:30:24.665 | 30.00th=[ 1680], 40.00th=[ 1778], 50.00th=[42206], 60.00th=[42206], 00:30:24.665 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:30:24.665 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:30:24.665 | 99.99th=[45351] 00:30:24.665 bw ( KiB/s): min= 704, max= 768, per=99.91%, avg=723.20, stdev=21.78, samples=20 00:30:24.665 iops : min= 176, max= 192, avg=180.80, stdev= 5.44, samples=20 00:30:24.665 lat (msec) : 2=48.79%, 4=1.10%, 50=50.11% 00:30:24.665 cpu : usr=95.29%, sys=4.45%, ctx=20, majf=0, minf=253 00:30:24.665 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:24.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.665 issued rwts: total=1812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.665 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:24.665 00:30:24.665 Run status group 0 (all jobs): 00:30:24.665 READ: bw=724KiB/s (741kB/s), 724KiB/s-724KiB/s (741kB/s-741kB/s), io=7248KiB (7422kB), run=10016-10016msec 00:30:24.925 17:55:46 -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:24.925 17:55:46 -- target/dif.sh@43 -- # local sub 00:30:24.925 17:55:46 -- target/dif.sh@45 -- # for sub in "$@" 00:30:24.925 17:55:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:24.925 17:55:46 -- target/dif.sh@36 -- # local sub_id=0 00:30:24.925 17:55:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:24.925 17:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.925 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.925 17:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.925 17:55:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:24.925 17:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.925 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.925 17:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.925 00:30:24.925 real 0m11.171s 00:30:24.925 user 0m16.423s 00:30:24.925 sys 0m0.712s 00:30:24.925 17:55:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:24.925 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.925 ************************************ 00:30:24.925 END TEST fio_dif_1_default 00:30:24.925 ************************************ 00:30:24.925 17:55:46 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:24.925 17:55:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:24.925 17:55:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:24.925 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.925 ************************************ 00:30:24.925 START TEST fio_dif_1_multi_subsystems 00:30:24.925 
************************************ 00:30:24.925 17:55:46 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:30:24.925 17:55:46 -- target/dif.sh@92 -- # local files=1 00:30:24.925 17:55:46 -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:24.925 17:55:46 -- target/dif.sh@28 -- # local sub 00:30:24.925 17:55:46 -- target/dif.sh@30 -- # for sub in "$@" 00:30:24.925 17:55:46 -- target/dif.sh@31 -- # create_subsystem 0 00:30:24.925 17:55:46 -- target/dif.sh@18 -- # local sub_id=0 00:30:24.925 17:55:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:24.925 17:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.925 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.925 bdev_null0 00:30:24.925 17:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.925 17:55:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:24.925 17:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.925 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.925 17:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.925 17:55:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:24.925 17:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.925 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.926 17:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.926 17:55:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:24.926 17:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.926 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.926 [2024-07-24 17:55:46.422727] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.926 17:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.926 17:55:46 -- target/dif.sh@30 -- # for sub in "$@" 00:30:24.926 17:55:46 -- target/dif.sh@31 -- # create_subsystem 1 00:30:24.926 17:55:46 -- target/dif.sh@18 -- # local sub_id=1 00:30:24.926 17:55:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:24.926 17:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.926 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.926 bdev_null1 00:30:24.926 17:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.926 17:55:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:24.926 17:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.926 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.926 17:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.926 17:55:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:24.926 17:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.926 17:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:24.926 17:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.926 17:55:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:24.926 17:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.926 17:55:46 -- common/autotest_common.sh@10 -- # set +x 
00:30:24.926 17:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.926 17:55:46 -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:24.926 17:55:46 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:24.926 17:55:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:24.926 17:55:46 -- nvmf/common.sh@520 -- # config=() 00:30:24.926 17:55:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:24.926 17:55:46 -- nvmf/common.sh@520 -- # local subsystem config 00:30:24.926 17:55:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:24.926 17:55:46 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:24.926 17:55:46 -- target/dif.sh@82 -- # gen_fio_conf 00:30:24.926 17:55:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:24.926 { 00:30:24.926 "params": { 00:30:24.926 "name": "Nvme$subsystem", 00:30:24.926 "trtype": "$TEST_TRANSPORT", 00:30:24.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.926 "adrfam": "ipv4", 00:30:24.926 "trsvcid": "$NVMF_PORT", 00:30:24.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.926 "hdgst": ${hdgst:-false}, 00:30:24.926 "ddgst": ${ddgst:-false} 00:30:24.926 }, 00:30:24.926 "method": "bdev_nvme_attach_controller" 00:30:24.926 } 00:30:24.926 EOF 00:30:24.926 )") 00:30:24.926 17:55:46 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:24.926 17:55:46 -- target/dif.sh@54 -- # local file 00:30:24.926 17:55:46 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:24.926 17:55:46 -- target/dif.sh@56 -- # cat 00:30:24.926 17:55:46 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:24.926 17:55:46 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:24.926 17:55:46 -- common/autotest_common.sh@1320 -- # shift 00:30:24.926 17:55:46 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:24.926 17:55:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.926 17:55:46 -- nvmf/common.sh@542 -- # cat 00:30:24.926 17:55:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:24.926 17:55:46 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:24.926 17:55:46 -- target/dif.sh@72 -- # (( file <= files )) 00:30:24.926 17:55:46 -- target/dif.sh@73 -- # cat 00:30:24.926 17:55:46 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:24.926 17:55:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:24.926 17:55:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:24.926 17:55:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:24.926 { 00:30:24.926 "params": { 00:30:24.926 "name": "Nvme$subsystem", 00:30:24.926 "trtype": "$TEST_TRANSPORT", 00:30:24.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.926 "adrfam": "ipv4", 00:30:24.926 "trsvcid": "$NVMF_PORT", 00:30:24.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.926 "hdgst": ${hdgst:-false}, 00:30:24.926 "ddgst": ${ddgst:-false} 00:30:24.926 }, 00:30:24.926 "method": "bdev_nvme_attach_controller" 00:30:24.926 } 00:30:24.926 EOF 00:30:24.926 )") 00:30:24.926 17:55:46 -- target/dif.sh@72 -- # (( file++ )) 00:30:24.926 
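The job file that gen_fio_conf hands to fio on /dev/fd/61 is built from heredocs and never echoed by xtrace, but the fio banner below (randread, 4096B blocks, iodepth 4, one job per filename) implies something close to the following. This is an approximation only: the bdev names Nvme0n1/Nvme1n1 and the thread/time_based/runtime settings are assumptions, not copied from dif.sh.

# Assumption-laden sketch of the generated fio job file for the two-subsystem run.
cat <<EOF
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF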
17:55:46 -- nvmf/common.sh@542 -- # cat 00:30:24.926 17:55:46 -- target/dif.sh@72 -- # (( file <= files )) 00:30:24.926 17:55:46 -- nvmf/common.sh@544 -- # jq . 00:30:24.926 17:55:46 -- nvmf/common.sh@545 -- # IFS=, 00:30:24.926 17:55:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:24.926 "params": { 00:30:24.926 "name": "Nvme0", 00:30:24.926 "trtype": "tcp", 00:30:24.926 "traddr": "10.0.0.2", 00:30:24.926 "adrfam": "ipv4", 00:30:24.926 "trsvcid": "4420", 00:30:24.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:24.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:24.926 "hdgst": false, 00:30:24.926 "ddgst": false 00:30:24.926 }, 00:30:24.926 "method": "bdev_nvme_attach_controller" 00:30:24.926 },{ 00:30:24.926 "params": { 00:30:24.926 "name": "Nvme1", 00:30:24.926 "trtype": "tcp", 00:30:24.926 "traddr": "10.0.0.2", 00:30:24.926 "adrfam": "ipv4", 00:30:24.926 "trsvcid": "4420", 00:30:24.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:24.926 "hdgst": false, 00:30:24.926 "ddgst": false 00:30:24.926 }, 00:30:24.926 "method": "bdev_nvme_attach_controller" 00:30:24.926 }' 00:30:24.926 17:55:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:24.926 17:55:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:24.926 17:55:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.926 17:55:46 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:24.926 17:55:46 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:24.926 17:55:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:25.192 17:55:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:25.192 17:55:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:25.192 17:55:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:25.192 17:55:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:25.449 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:25.449 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:25.449 fio-3.35 00:30:25.449 Starting 2 threads 00:30:25.449 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.014 [2024-07-24 17:55:47.319344] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
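The joined JSON printed above comes from the gen_nvmf_target_json pattern visible in the trace: one heredoc per subsystem appended to a config array, the elements joined with IFS=',' and the result pretty-printed through jq. Stripped of the test plumbing it looks roughly like this; the outer wrapper that nvmf/common.sh feeds to jq is not echoed by xtrace, so a bare JSON array stands in for it here.

# Sketch of the gen_nvmf_target_json pattern (two subsystems, as in this run).
config=()
for subsystem in 0 1; do
	config+=("$(
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
	)")
done
# Join the per-subsystem blocks with commas and pretty-print the result.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .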
00:30:26.014 [2024-07-24 17:55:47.319387] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:35.979 00:30:35.979 filename0: (groupid=0, jobs=1): err= 0: pid=800977: Wed Jul 24 17:55:57 2024 00:30:35.979 read: IOPS=180, BW=724KiB/s (741kB/s)(7264KiB/10037msec) 00:30:35.979 slat (nsec): min=5910, max=24170, avg=7071.96, stdev=1893.80 00:30:35.979 clat (usec): min=1205, max=43484, avg=22086.65, stdev=20292.10 00:30:35.979 lat (usec): min=1211, max=43508, avg=22093.72, stdev=20291.52 00:30:35.979 clat percentiles (usec): 00:30:35.979 | 1.00th=[ 1450], 5.00th=[ 1631], 10.00th=[ 1647], 20.00th=[ 1663], 00:30:35.979 | 30.00th=[ 1680], 40.00th=[ 1811], 50.00th=[41681], 60.00th=[42206], 00:30:35.979 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:35.979 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:30:35.979 | 99.99th=[43254] 00:30:35.979 bw ( KiB/s): min= 672, max= 768, per=65.63%, avg=724.80, stdev=33.28, samples=20 00:30:35.979 iops : min= 168, max= 192, avg=181.20, stdev= 8.32, samples=20 00:30:35.979 lat (msec) : 2=49.12%, 4=0.66%, 50=50.22% 00:30:35.979 cpu : usr=97.92%, sys=1.83%, ctx=12, majf=0, minf=134 00:30:35.979 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.979 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.979 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:35.979 filename1: (groupid=0, jobs=1): err= 0: pid=800978: Wed Jul 24 17:55:57 2024 00:30:35.979 read: IOPS=95, BW=380KiB/s (389kB/s)(3808KiB/10013msec) 00:30:35.979 slat (nsec): min=5939, max=24897, avg=7663.39, stdev=2398.00 00:30:35.979 clat (usec): min=41728, max=43963, avg=42044.78, stdev=262.51 00:30:35.979 lat (usec): min=41734, max=43988, avg=42052.44, stdev=262.85 00:30:35.979 clat percentiles (usec): 00:30:35.979 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:30:35.979 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:35.979 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:30:35.979 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:30:35.979 | 99.99th=[43779] 00:30:35.979 bw ( KiB/s): min= 352, max= 384, per=34.36%, avg=379.20, stdev=11.72, samples=20 00:30:35.979 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:30:35.979 lat (msec) : 50=100.00% 00:30:35.979 cpu : usr=98.09%, sys=1.66%, ctx=7, majf=0, minf=165 00:30:35.979 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.979 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.979 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:35.979 00:30:35.979 Run status group 0 (all jobs): 00:30:35.979 READ: bw=1103KiB/s (1130kB/s), 380KiB/s-724KiB/s (389kB/s-741kB/s), io=10.8MiB (11.3MB), run=10013-10037msec 00:30:36.238 17:55:57 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:36.238 17:55:57 -- target/dif.sh@43 -- # local sub 00:30:36.238 17:55:57 -- target/dif.sh@45 -- # for sub in "$@" 00:30:36.238 17:55:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:36.238 17:55:57 -- 
target/dif.sh@36 -- # local sub_id=0 00:30:36.238 17:55:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:36.238 17:55:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.238 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:30:36.238 17:55:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.238 17:55:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:36.238 17:55:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.238 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:30:36.238 17:55:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.238 17:55:57 -- target/dif.sh@45 -- # for sub in "$@" 00:30:36.238 17:55:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:36.238 17:55:57 -- target/dif.sh@36 -- # local sub_id=1 00:30:36.238 17:55:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:36.238 17:55:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.238 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:30:36.238 17:55:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.238 17:55:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:36.238 17:55:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.238 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:30:36.238 17:55:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.238 00:30:36.238 real 0m11.301s 00:30:36.238 user 0m26.612s 00:30:36.238 sys 0m0.629s 00:30:36.238 17:55:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.238 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:30:36.238 ************************************ 00:30:36.238 END TEST fio_dif_1_multi_subsystems 00:30:36.238 ************************************ 00:30:36.238 17:55:57 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:36.238 17:55:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:36.238 17:55:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:36.238 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:30:36.238 ************************************ 00:30:36.238 START TEST fio_dif_rand_params 00:30:36.238 ************************************ 00:30:36.238 17:55:57 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:30:36.238 17:55:57 -- target/dif.sh@100 -- # local NULL_DIF 00:30:36.238 17:55:57 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:36.238 17:55:57 -- target/dif.sh@103 -- # NULL_DIF=3 00:30:36.239 17:55:57 -- target/dif.sh@103 -- # bs=128k 00:30:36.239 17:55:57 -- target/dif.sh@103 -- # numjobs=3 00:30:36.239 17:55:57 -- target/dif.sh@103 -- # iodepth=3 00:30:36.239 17:55:57 -- target/dif.sh@103 -- # runtime=5 00:30:36.239 17:55:57 -- target/dif.sh@105 -- # create_subsystems 0 00:30:36.239 17:55:57 -- target/dif.sh@28 -- # local sub 00:30:36.239 17:55:57 -- target/dif.sh@30 -- # for sub in "$@" 00:30:36.239 17:55:57 -- target/dif.sh@31 -- # create_subsystem 0 00:30:36.239 17:55:57 -- target/dif.sh@18 -- # local sub_id=0 00:30:36.239 17:55:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:36.239 17:55:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.239 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:30:36.239 bdev_null0 00:30:36.239 17:55:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.239 17:55:57 -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:36.239 17:55:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.239 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:30:36.239 17:55:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.239 17:55:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:36.239 17:55:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.239 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:30:36.239 17:55:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.239 17:55:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:36.239 17:55:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.239 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:30:36.239 [2024-07-24 17:55:57.762649] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.239 17:55:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.239 17:55:57 -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:36.239 17:55:57 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:36.239 17:55:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:36.239 17:55:57 -- nvmf/common.sh@520 -- # config=() 00:30:36.239 17:55:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:36.239 17:55:57 -- nvmf/common.sh@520 -- # local subsystem config 00:30:36.239 17:55:57 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:36.239 17:55:57 -- target/dif.sh@82 -- # gen_fio_conf 00:30:36.239 17:55:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:36.239 17:55:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:36.239 17:55:57 -- target/dif.sh@54 -- # local file 00:30:36.239 17:55:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:36.239 { 00:30:36.239 "params": { 00:30:36.239 "name": "Nvme$subsystem", 00:30:36.239 "trtype": "$TEST_TRANSPORT", 00:30:36.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.239 "adrfam": "ipv4", 00:30:36.239 "trsvcid": "$NVMF_PORT", 00:30:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.239 "hdgst": ${hdgst:-false}, 00:30:36.239 "ddgst": ${ddgst:-false} 00:30:36.239 }, 00:30:36.239 "method": "bdev_nvme_attach_controller" 00:30:36.239 } 00:30:36.239 EOF 00:30:36.239 )") 00:30:36.239 17:55:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:36.239 17:55:57 -- target/dif.sh@56 -- # cat 00:30:36.239 17:55:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:36.239 17:55:57 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:36.239 17:55:57 -- common/autotest_common.sh@1320 -- # shift 00:30:36.239 17:55:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:36.239 17:55:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.239 17:55:57 -- nvmf/common.sh@542 -- # cat 00:30:36.239 17:55:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:36.239 17:55:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:30:36.239 17:55:57 -- target/dif.sh@72 -- # (( file <= files )) 00:30:36.239 17:55:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:36.239 17:55:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:36.239 17:55:57 -- nvmf/common.sh@544 -- # jq . 00:30:36.239 17:55:57 -- nvmf/common.sh@545 -- # IFS=, 00:30:36.239 17:55:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:36.239 "params": { 00:30:36.239 "name": "Nvme0", 00:30:36.239 "trtype": "tcp", 00:30:36.239 "traddr": "10.0.0.2", 00:30:36.239 "adrfam": "ipv4", 00:30:36.239 "trsvcid": "4420", 00:30:36.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:36.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:36.239 "hdgst": false, 00:30:36.239 "ddgst": false 00:30:36.239 }, 00:30:36.239 "method": "bdev_nvme_attach_controller" 00:30:36.239 }' 00:30:36.239 17:55:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:36.239 17:55:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:36.239 17:55:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.239 17:55:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:36.239 17:55:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:36.239 17:55:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:36.239 17:55:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:36.239 17:55:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:36.239 17:55:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:36.239 17:55:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:36.804 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:36.804 ... 00:30:36.804 fio-3.35 00:30:36.804 Starting 3 threads 00:30:36.804 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.062 [2024-07-24 17:55:58.535630] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
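Every one of these runs goes through the same fio_plugin helper from autotest_common.sh: ldd the SPDK fio plugin, pull out any ASAN runtime it links against (none here, hence the empty asan_lib= assignments), preload that runtime ahead of the plugin, and run fio with the generated JSON config on one fd and the job file on the other. A condensed sketch of that logic, using the paths from this workspace:

# Condensed sketch of the fio_plugin()/LD_PRELOAD dance traced above.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
	# Resolve the sanitizer runtime the plugin is linked against, if any.
	asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
	[[ -n $asan_lib ]] && break
done
# Preload the (possibly empty) sanitizer runtime ahead of the bdev plugin.
# /dev/fd/62 and /dev/fd/61 are the process-substitution fds carrying the JSON
# config and the fio job file in the real run; use real file paths standalone.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
	--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61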
00:30:37.062 [2024-07-24 17:55:58.535676] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:42.338 00:30:42.338 filename0: (groupid=0, jobs=1): err= 0: pid=802934: Wed Jul 24 17:56:03 2024 00:30:42.338 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5003msec) 00:30:42.338 slat (nsec): min=6186, max=49855, avg=8920.02, stdev=3099.40 00:30:42.338 clat (usec): min=4717, max=62243, avg=13996.84, stdev=14148.69 00:30:42.338 lat (usec): min=4724, max=62273, avg=14005.76, stdev=14148.88 00:30:42.338 clat percentiles (usec): 00:30:42.338 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 7177], 00:30:42.338 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[ 9503], 00:30:42.338 | 70.00th=[10814], 80.00th=[13698], 90.00th=[50594], 95.00th=[53216], 00:30:42.338 | 99.00th=[56361], 99.50th=[57934], 99.90th=[62129], 99.95th=[62129], 00:30:42.338 | 99.99th=[62129] 00:30:42.338 bw ( KiB/s): min=19200, max=36096, per=31.63%, avg=27989.33, stdev=7154.27, samples=9 00:30:42.338 iops : min= 150, max= 282, avg=218.67, stdev=55.89, samples=9 00:30:42.338 lat (msec) : 10=63.96%, 20=24.84%, 50=0.65%, 100=10.55% 00:30:42.338 cpu : usr=95.74%, sys=3.34%, ctx=16, majf=0, minf=108 00:30:42.338 IO depths : 1=4.7%, 2=95.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.338 issued rwts: total=1071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.338 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:42.338 filename0: (groupid=0, jobs=1): err= 0: pid=802935: Wed Jul 24 17:56:03 2024 00:30:42.338 read: IOPS=309, BW=38.7MiB/s (40.6MB/s)(194MiB/5006msec) 00:30:42.338 slat (nsec): min=6149, max=32226, avg=8768.86, stdev=2623.29 00:30:42.338 clat (usec): min=4920, max=59381, avg=9669.26, stdev=9452.11 00:30:42.338 lat (usec): min=4927, max=59390, avg=9678.03, stdev=9452.34 00:30:42.338 clat percentiles (usec): 00:30:42.338 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5800], 20.00th=[ 6063], 00:30:42.338 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7635], 00:30:42.338 | 70.00th=[ 8225], 80.00th=[ 9241], 90.00th=[11600], 95.00th=[16712], 00:30:42.338 | 99.00th=[54264], 99.50th=[56361], 99.90th=[59507], 99.95th=[59507], 00:30:42.338 | 99.99th=[59507] 00:30:42.338 bw ( KiB/s): min=28928, max=59904, per=44.79%, avg=39636.00, stdev=9025.15, samples=10 00:30:42.338 iops : min= 226, max= 468, avg=309.60, stdev=70.53, samples=10 00:30:42.338 lat (msec) : 10=85.04%, 20=10.32%, 50=1.93%, 100=2.71% 00:30:42.338 cpu : usr=95.28%, sys=3.98%, ctx=17, majf=0, minf=159 00:30:42.338 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.338 issued rwts: total=1551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.338 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:42.338 filename0: (groupid=0, jobs=1): err= 0: pid=802936: Wed Jul 24 17:56:03 2024 00:30:42.338 read: IOPS=169, BW=21.1MiB/s (22.2MB/s)(106MiB/5021msec) 00:30:42.338 slat (nsec): min=6148, max=39761, avg=9130.14, stdev=2992.49 00:30:42.338 clat (usec): min=5890, max=58924, avg=17724.22, stdev=16922.76 00:30:42.338 lat (usec): min=5897, max=58936, avg=17733.35, stdev=16922.78 00:30:42.338 clat percentiles 
(usec): 00:30:42.338 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 7570], 00:30:42.338 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[11338], 00:30:42.338 | 70.00th=[13173], 80.00th=[16581], 90.00th=[52167], 95.00th=[54264], 00:30:42.338 | 99.00th=[57410], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:30:42.338 | 99.99th=[58983] 00:30:42.338 bw ( KiB/s): min=14592, max=29952, per=24.48%, avg=21657.60, stdev=4759.12, samples=10 00:30:42.338 iops : min= 114, max= 234, avg=169.20, stdev=37.18, samples=10 00:30:42.338 lat (msec) : 10=51.59%, 20=30.04%, 50=2.94%, 100=15.43% 00:30:42.338 cpu : usr=95.92%, sys=3.21%, ctx=13, majf=0, minf=96 00:30:42.338 IO depths : 1=4.6%, 2=95.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.338 issued rwts: total=849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.338 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:42.338 00:30:42.338 Run status group 0 (all jobs): 00:30:42.338 READ: bw=86.4MiB/s (90.6MB/s), 21.1MiB/s-38.7MiB/s (22.2MB/s-40.6MB/s), io=434MiB (455MB), run=5003-5021msec 00:30:42.338 17:56:03 -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:42.338 17:56:03 -- target/dif.sh@43 -- # local sub 00:30:42.338 17:56:03 -- target/dif.sh@45 -- # for sub in "$@" 00:30:42.338 17:56:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:42.338 17:56:03 -- target/dif.sh@36 -- # local sub_id=0 00:30:42.338 17:56:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:42.338 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.338 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.338 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.338 17:56:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:42.338 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.338 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.338 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.338 17:56:03 -- target/dif.sh@109 -- # NULL_DIF=2 00:30:42.338 17:56:03 -- target/dif.sh@109 -- # bs=4k 00:30:42.338 17:56:03 -- target/dif.sh@109 -- # numjobs=8 00:30:42.338 17:56:03 -- target/dif.sh@109 -- # iodepth=16 00:30:42.338 17:56:03 -- target/dif.sh@109 -- # runtime= 00:30:42.338 17:56:03 -- target/dif.sh@109 -- # files=2 00:30:42.338 17:56:03 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:42.338 17:56:03 -- target/dif.sh@28 -- # local sub 00:30:42.338 17:56:03 -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.338 17:56:03 -- target/dif.sh@31 -- # create_subsystem 0 00:30:42.338 17:56:03 -- target/dif.sh@18 -- # local sub_id=0 00:30:42.338 17:56:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:42.338 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.338 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.338 bdev_null0 00:30:42.338 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.338 17:56:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:42.338 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.338 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.338 17:56:03 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:30:42.338 17:56:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:42.338 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.338 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.338 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.338 17:56:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:42.338 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.338 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.338 [2024-07-24 17:56:03.931199] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.598 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.598 17:56:03 -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.598 17:56:03 -- target/dif.sh@31 -- # create_subsystem 1 00:30:42.598 17:56:03 -- target/dif.sh@18 -- # local sub_id=1 00:30:42.598 17:56:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:42.598 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.598 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.598 bdev_null1 00:30:42.598 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.598 17:56:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:42.598 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.598 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.598 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.598 17:56:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:42.598 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.598 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.598 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.598 17:56:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.598 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.598 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.598 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.598 17:56:03 -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.598 17:56:03 -- target/dif.sh@31 -- # create_subsystem 2 00:30:42.598 17:56:03 -- target/dif.sh@18 -- # local sub_id=2 00:30:42.598 17:56:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:42.598 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.598 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.598 bdev_null2 00:30:42.598 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.598 17:56:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:42.598 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.598 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.598 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.598 17:56:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:42.598 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.598 17:56:03 -- 
common/autotest_common.sh@10 -- # set +x 00:30:42.598 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.598 17:56:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:42.598 17:56:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.598 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.598 17:56:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.598 17:56:03 -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:42.598 17:56:03 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:42.598 17:56:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:42.598 17:56:04 -- nvmf/common.sh@520 -- # config=() 00:30:42.598 17:56:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.598 17:56:04 -- nvmf/common.sh@520 -- # local subsystem config 00:30:42.598 17:56:04 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.598 17:56:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:42.598 17:56:04 -- target/dif.sh@82 -- # gen_fio_conf 00:30:42.598 17:56:04 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:42.598 17:56:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:42.598 { 00:30:42.598 "params": { 00:30:42.598 "name": "Nvme$subsystem", 00:30:42.598 "trtype": "$TEST_TRANSPORT", 00:30:42.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.598 "adrfam": "ipv4", 00:30:42.598 "trsvcid": "$NVMF_PORT", 00:30:42.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.598 "hdgst": ${hdgst:-false}, 00:30:42.598 "ddgst": ${ddgst:-false} 00:30:42.598 }, 00:30:42.598 "method": "bdev_nvme_attach_controller" 00:30:42.598 } 00:30:42.598 EOF 00:30:42.598 )") 00:30:42.599 17:56:04 -- target/dif.sh@54 -- # local file 00:30:42.599 17:56:04 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:42.599 17:56:04 -- target/dif.sh@56 -- # cat 00:30:42.599 17:56:04 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:42.599 17:56:04 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.599 17:56:04 -- common/autotest_common.sh@1320 -- # shift 00:30:42.599 17:56:04 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:42.599 17:56:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.599 17:56:04 -- nvmf/common.sh@542 -- # cat 00:30:42.599 17:56:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.599 17:56:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:42.599 17:56:04 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:42.599 17:56:04 -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.599 17:56:04 -- target/dif.sh@73 -- # cat 00:30:42.599 17:56:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:42.599 17:56:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:42.599 17:56:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:42.599 { 00:30:42.599 "params": { 00:30:42.599 "name": "Nvme$subsystem", 00:30:42.599 "trtype": "$TEST_TRANSPORT", 00:30:42.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.599 "adrfam": "ipv4", 00:30:42.599 "trsvcid": 
"$NVMF_PORT", 00:30:42.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.599 "hdgst": ${hdgst:-false}, 00:30:42.599 "ddgst": ${ddgst:-false} 00:30:42.599 }, 00:30:42.599 "method": "bdev_nvme_attach_controller" 00:30:42.599 } 00:30:42.599 EOF 00:30:42.599 )") 00:30:42.599 17:56:04 -- target/dif.sh@72 -- # (( file++ )) 00:30:42.599 17:56:04 -- nvmf/common.sh@542 -- # cat 00:30:42.599 17:56:04 -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.599 17:56:04 -- target/dif.sh@73 -- # cat 00:30:42.599 17:56:04 -- target/dif.sh@72 -- # (( file++ )) 00:30:42.599 17:56:04 -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.599 17:56:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:42.599 17:56:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:42.599 { 00:30:42.599 "params": { 00:30:42.599 "name": "Nvme$subsystem", 00:30:42.599 "trtype": "$TEST_TRANSPORT", 00:30:42.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.599 "adrfam": "ipv4", 00:30:42.599 "trsvcid": "$NVMF_PORT", 00:30:42.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.599 "hdgst": ${hdgst:-false}, 00:30:42.599 "ddgst": ${ddgst:-false} 00:30:42.599 }, 00:30:42.599 "method": "bdev_nvme_attach_controller" 00:30:42.599 } 00:30:42.599 EOF 00:30:42.599 )") 00:30:42.599 17:56:04 -- nvmf/common.sh@542 -- # cat 00:30:42.599 17:56:04 -- nvmf/common.sh@544 -- # jq . 00:30:42.599 17:56:04 -- nvmf/common.sh@545 -- # IFS=, 00:30:42.599 17:56:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:42.599 "params": { 00:30:42.599 "name": "Nvme0", 00:30:42.599 "trtype": "tcp", 00:30:42.599 "traddr": "10.0.0.2", 00:30:42.599 "adrfam": "ipv4", 00:30:42.599 "trsvcid": "4420", 00:30:42.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.599 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:42.599 "hdgst": false, 00:30:42.599 "ddgst": false 00:30:42.599 }, 00:30:42.599 "method": "bdev_nvme_attach_controller" 00:30:42.599 },{ 00:30:42.599 "params": { 00:30:42.599 "name": "Nvme1", 00:30:42.599 "trtype": "tcp", 00:30:42.599 "traddr": "10.0.0.2", 00:30:42.599 "adrfam": "ipv4", 00:30:42.599 "trsvcid": "4420", 00:30:42.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.599 "hdgst": false, 00:30:42.599 "ddgst": false 00:30:42.599 }, 00:30:42.599 "method": "bdev_nvme_attach_controller" 00:30:42.599 },{ 00:30:42.599 "params": { 00:30:42.599 "name": "Nvme2", 00:30:42.599 "trtype": "tcp", 00:30:42.599 "traddr": "10.0.0.2", 00:30:42.599 "adrfam": "ipv4", 00:30:42.599 "trsvcid": "4420", 00:30:42.599 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:42.599 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:42.599 "hdgst": false, 00:30:42.599 "ddgst": false 00:30:42.599 }, 00:30:42.599 "method": "bdev_nvme_attach_controller" 00:30:42.599 }' 00:30:42.599 17:56:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:42.599 17:56:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:42.599 17:56:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.599 17:56:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.599 17:56:04 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:42.599 17:56:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:42.599 17:56:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:42.599 17:56:04 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:42.599 17:56:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:42.599 17:56:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.858 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:42.858 ... 00:30:42.858 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:42.858 ... 00:30:42.858 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:42.858 ... 00:30:42.858 fio-3.35 00:30:42.858 Starting 24 threads 00:30:42.858 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.805 [2024-07-24 17:56:05.210909] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:30:43.805 [2024-07-24 17:56:05.210945] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:56.001 00:30:56.001 filename0: (groupid=0, jobs=1): err= 0: pid=804235: Wed Jul 24 17:56:15 2024 00:30:56.001 read: IOPS=781, BW=3124KiB/s (3199kB/s)(30.6MiB/10020msec) 00:30:56.001 slat (nsec): min=6222, max=82195, avg=24569.30, stdev=17225.56 00:30:56.001 clat (usec): min=2099, max=44187, avg=20286.81, stdev=5021.09 00:30:56.001 lat (usec): min=2106, max=44224, avg=20311.38, stdev=5029.62 00:30:56.001 clat percentiles (usec): 00:30:56.001 | 1.00th=[ 4555], 5.00th=[13960], 10.00th=[14877], 20.00th=[15926], 00:30:56.001 | 30.00th=[16712], 40.00th=[17695], 50.00th=[20841], 60.00th=[22938], 00:30:56.001 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:56.001 | 99.00th=[37487], 99.50th=[40109], 99.90th=[42730], 99.95th=[42730], 00:30:56.001 | 99.99th=[44303] 00:30:56.001 bw ( KiB/s): min= 2560, max= 3888, per=5.44%, avg=3126.40, stdev=500.11, samples=20 00:30:56.001 iops : min= 640, max= 972, avg=781.60, stdev=125.03, samples=20 00:30:56.001 lat (msec) : 4=0.81%, 10=0.45%, 20=47.89%, 50=50.86% 00:30:56.001 cpu : usr=90.57%, sys=3.87%, ctx=60, majf=0, minf=66 00:30:56.001 IO depths : 1=2.3%, 2=5.2%, 4=15.3%, 8=66.7%, 16=10.5%, 32=0.0%, >=64=0.0% 00:30:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.001 complete : 0=0.0%, 4=91.5%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.001 issued rwts: total=7826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.001 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.001 filename0: (groupid=0, jobs=1): err= 0: pid=804236: Wed Jul 24 17:56:15 2024 00:30:56.001 read: IOPS=662, BW=2650KiB/s (2713kB/s)(25.9MiB/10020msec) 00:30:56.001 slat (nsec): min=4136, max=62391, avg=13746.13, stdev=9473.59 00:30:56.001 clat (usec): min=2301, max=51529, avg=24055.54, stdev=3600.24 00:30:56.001 lat (usec): min=2307, max=51537, avg=24069.29, stdev=3600.45 00:30:56.001 clat percentiles (usec): 00:30:56.001 | 1.00th=[ 4146], 5.00th=[21627], 10.00th=[22676], 20.00th=[23200], 00:30:56.001 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:30:56.001 | 70.00th=[24773], 80.00th=[25297], 90.00th=[25822], 95.00th=[26346], 00:30:56.001 | 99.00th=[34866], 99.50th=[37487], 99.90th=[43254], 99.95th=[43254], 00:30:56.001 | 99.99th=[51643] 00:30:56.001 bw ( KiB/s): min= 2480, max= 2949, per=4.61%, avg=2648.65, stdev=106.79, 
samples=20 00:30:56.001 iops : min= 620, max= 737, avg=662.15, stdev=26.66, samples=20 00:30:56.001 lat (msec) : 4=0.69%, 10=0.80%, 20=2.88%, 50=95.59%, 100=0.05% 00:30:56.001 cpu : usr=98.65%, sys=0.94%, ctx=17, majf=0, minf=49 00:30:56.001 IO depths : 1=4.2%, 2=8.6%, 4=19.2%, 8=59.2%, 16=8.8%, 32=0.0%, >=64=0.0% 00:30:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.001 complete : 0=0.0%, 4=92.8%, 8=1.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.001 issued rwts: total=6637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.001 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.001 filename0: (groupid=0, jobs=1): err= 0: pid=804237: Wed Jul 24 17:56:15 2024 00:30:56.001 read: IOPS=608, BW=2432KiB/s (2490kB/s)(23.8MiB/10008msec) 00:30:56.001 slat (nsec): min=6434, max=84025, avg=26846.72, stdev=16926.58 00:30:56.001 clat (usec): min=10662, max=47120, avg=26167.69, stdev=4722.74 00:30:56.001 lat (usec): min=10695, max=47139, avg=26194.54, stdev=4722.45 00:30:56.001 clat percentiles (usec): 00:30:56.001 | 1.00th=[15401], 5.00th=[20055], 10.00th=[22676], 20.00th=[23462], 00:30:56.001 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:30:56.001 | 70.00th=[26084], 80.00th=[30540], 90.00th=[33162], 95.00th=[34866], 00:30:56.001 | 99.00th=[40109], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:30:56.001 | 99.99th=[46924] 00:30:56.001 bw ( KiB/s): min= 2304, max= 2560, per=4.23%, avg=2432.80, stdev=79.62, samples=20 00:30:56.001 iops : min= 576, max= 640, avg=608.20, stdev=19.90, samples=20 00:30:56.001 lat (msec) : 20=4.96%, 50=95.04% 00:30:56.001 cpu : usr=96.74%, sys=1.80%, ctx=517, majf=0, minf=38 00:30:56.001 IO depths : 1=0.2%, 2=0.4%, 4=7.0%, 8=78.2%, 16=14.2%, 32=0.0%, >=64=0.0% 00:30:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.001 complete : 0=0.0%, 4=90.0%, 8=6.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.001 issued rwts: total=6085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.001 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.001 filename0: (groupid=0, jobs=1): err= 0: pid=804238: Wed Jul 24 17:56:15 2024 00:30:56.001 read: IOPS=603, BW=2415KiB/s (2473kB/s)(23.6MiB/10006msec) 00:30:56.001 slat (nsec): min=6093, max=71458, avg=22748.62, stdev=13972.49 00:30:56.001 clat (usec): min=9748, max=54063, avg=26354.28, stdev=5373.26 00:30:56.001 lat (usec): min=9755, max=54080, avg=26377.03, stdev=5372.20 00:30:56.001 clat percentiles (usec): 00:30:56.001 | 1.00th=[12911], 5.00th=[17957], 10.00th=[22152], 20.00th=[23462], 00:30:56.001 | 30.00th=[23987], 40.00th=[24249], 50.00th=[25035], 60.00th=[25560], 00:30:56.001 | 70.00th=[27132], 80.00th=[31327], 90.00th=[33424], 95.00th=[35390], 00:30:56.001 | 99.00th=[42730], 99.50th=[47449], 99.90th=[49546], 99.95th=[53740], 00:30:56.001 | 99.99th=[54264] 00:30:56.001 bw ( KiB/s): min= 2160, max= 2656, per=4.20%, avg=2413.68, stdev=145.77, samples=19 00:30:56.001 iops : min= 540, max= 664, avg=603.42, stdev=36.44, samples=19 00:30:56.001 lat (msec) : 10=0.03%, 20=7.05%, 50=92.83%, 100=0.08% 00:30:56.001 cpu : usr=98.70%, sys=0.89%, ctx=17, majf=0, minf=49 00:30:56.001 IO depths : 1=0.8%, 2=2.5%, 4=11.9%, 8=72.2%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.001 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.001 issued rwts: total=6042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.001 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:30:56.001 filename0: (groupid=0, jobs=1): err= 0: pid=804239: Wed Jul 24 17:56:15 2024 00:30:56.001 read: IOPS=564, BW=2260KiB/s (2314kB/s)(22.1MiB/10016msec) 00:30:56.001 slat (nsec): min=6715, max=72692, avg=15477.32, stdev=10145.34 00:30:56.001 clat (usec): min=10913, max=52481, avg=28235.43, stdev=5864.24 00:30:56.001 lat (usec): min=10925, max=52489, avg=28250.91, stdev=5864.62 00:30:56.001 clat percentiles (usec): 00:30:56.001 | 1.00th=[14746], 5.00th=[20579], 10.00th=[22938], 20.00th=[23987], 00:30:56.001 | 30.00th=[24773], 40.00th=[25035], 50.00th=[26084], 60.00th=[29754], 00:30:56.001 | 70.00th=[31327], 80.00th=[32900], 90.00th=[35390], 95.00th=[38011], 00:30:56.001 | 99.00th=[47973], 99.50th=[49021], 99.90th=[51643], 99.95th=[52691], 00:30:56.001 | 99.99th=[52691] 00:30:56.001 bw ( KiB/s): min= 2072, max= 2488, per=3.93%, avg=2257.20, stdev=128.95, samples=20 00:30:56.002 iops : min= 518, max= 622, avg=564.30, stdev=32.24, samples=20 00:30:56.002 lat (msec) : 20=4.52%, 50=95.05%, 100=0.42% 00:30:56.002 cpu : usr=98.64%, sys=0.95%, ctx=18, majf=0, minf=50 00:30:56.002 IO depths : 1=0.4%, 2=1.1%, 4=7.3%, 8=77.8%, 16=13.4%, 32=0.0%, >=64=0.0% 00:30:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 issued rwts: total=5659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.002 filename0: (groupid=0, jobs=1): err= 0: pid=804240: Wed Jul 24 17:56:15 2024 00:30:56.002 read: IOPS=566, BW=2268KiB/s (2322kB/s)(22.2MiB/10003msec) 00:30:56.002 slat (nsec): min=4792, max=73411, avg=21545.47, stdev=13613.12 00:30:56.002 clat (usec): min=2897, max=54860, avg=28111.09, stdev=5850.42 00:30:56.002 lat (usec): min=2903, max=54874, avg=28132.64, stdev=5849.97 00:30:56.002 clat percentiles (usec): 00:30:56.002 | 1.00th=[13829], 5.00th=[20055], 10.00th=[22938], 20.00th=[23987], 00:30:56.002 | 30.00th=[24511], 40.00th=[25297], 50.00th=[26608], 60.00th=[30016], 00:30:56.002 | 70.00th=[31065], 80.00th=[32900], 90.00th=[34866], 95.00th=[37487], 00:30:56.002 | 99.00th=[46400], 99.50th=[48497], 99.90th=[50070], 99.95th=[54789], 00:30:56.002 | 99.99th=[54789] 00:30:56.002 bw ( KiB/s): min= 1912, max= 2544, per=3.92%, avg=2254.74, stdev=156.05, samples=19 00:30:56.002 iops : min= 478, max= 636, avg=563.68, stdev=39.01, samples=19 00:30:56.002 lat (msec) : 4=0.11%, 10=0.39%, 20=4.36%, 50=94.89%, 100=0.26% 00:30:56.002 cpu : usr=98.88%, sys=0.72%, ctx=16, majf=0, minf=59 00:30:56.002 IO depths : 1=0.1%, 2=0.4%, 4=6.5%, 8=79.1%, 16=14.0%, 32=0.0%, >=64=0.0% 00:30:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 issued rwts: total=5671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.002 filename0: (groupid=0, jobs=1): err= 0: pid=804241: Wed Jul 24 17:56:15 2024 00:30:56.002 read: IOPS=597, BW=2391KiB/s (2448kB/s)(23.4MiB/10016msec) 00:30:56.002 slat (nsec): min=6499, max=97142, avg=29487.10, stdev=17626.85 00:30:56.002 clat (usec): min=10267, max=47460, avg=26595.49, stdev=4943.79 00:30:56.002 lat (usec): min=10275, max=47470, avg=26624.98, stdev=4942.18 00:30:56.002 clat percentiles (usec): 00:30:56.002 | 1.00th=[15008], 5.00th=[21365], 10.00th=[22676], 20.00th=[23462], 
00:30:56.002 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:30:56.002 | 70.00th=[28181], 80.00th=[31327], 90.00th=[33817], 95.00th=[35914], 00:30:56.002 | 99.00th=[40109], 99.50th=[42206], 99.90th=[46924], 99.95th=[47449], 00:30:56.002 | 99.99th=[47449] 00:30:56.002 bw ( KiB/s): min= 1920, max= 2640, per=4.16%, avg=2388.40, stdev=162.58, samples=20 00:30:56.002 iops : min= 480, max= 660, avg=597.10, stdev=40.64, samples=20 00:30:56.002 lat (msec) : 20=4.01%, 50=95.99% 00:30:56.002 cpu : usr=98.78%, sys=0.80%, ctx=113, majf=0, minf=65 00:30:56.002 IO depths : 1=0.6%, 2=1.5%, 4=8.7%, 8=75.6%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 complete : 0=0.0%, 4=90.6%, 8=5.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 issued rwts: total=5987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.002 filename0: (groupid=0, jobs=1): err= 0: pid=804242: Wed Jul 24 17:56:15 2024 00:30:56.002 read: IOPS=577, BW=2309KiB/s (2364kB/s)(22.6MiB/10001msec) 00:30:56.002 slat (nsec): min=5946, max=72718, avg=20978.72, stdev=13499.11 00:30:56.002 clat (usec): min=10414, max=66177, avg=27602.95, stdev=5427.92 00:30:56.002 lat (usec): min=10421, max=66201, avg=27623.93, stdev=5426.75 00:30:56.002 clat percentiles (usec): 00:30:56.002 | 1.00th=[14353], 5.00th=[21890], 10.00th=[23200], 20.00th=[23987], 00:30:56.002 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25560], 60.00th=[27919], 00:30:56.002 | 70.00th=[30802], 80.00th=[32375], 90.00th=[34341], 95.00th=[35914], 00:30:56.002 | 99.00th=[42730], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:30:56.002 | 99.99th=[66323] 00:30:56.002 bw ( KiB/s): min= 2144, max= 2480, per=4.01%, avg=2306.74, stdev=83.44, samples=19 00:30:56.002 iops : min= 536, max= 620, avg=576.68, stdev=20.86, samples=19 00:30:56.002 lat (msec) : 20=3.98%, 50=95.65%, 100=0.36% 00:30:56.002 cpu : usr=98.77%, sys=0.83%, ctx=18, majf=0, minf=34 00:30:56.002 IO depths : 1=0.4%, 2=1.1%, 4=8.5%, 8=76.4%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 complete : 0=0.0%, 4=90.4%, 8=5.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 issued rwts: total=5773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.002 filename1: (groupid=0, jobs=1): err= 0: pid=804243: Wed Jul 24 17:56:15 2024 00:30:56.002 read: IOPS=596, BW=2386KiB/s (2443kB/s)(23.3MiB/10011msec) 00:30:56.002 slat (nsec): min=6807, max=70934, avg=19707.53, stdev=12877.12 00:30:56.002 clat (usec): min=10704, max=49391, avg=26723.88, stdev=5081.32 00:30:56.002 lat (usec): min=10712, max=49399, avg=26743.59, stdev=5081.04 00:30:56.002 clat percentiles (usec): 00:30:56.002 | 1.00th=[14615], 5.00th=[21103], 10.00th=[22676], 20.00th=[23462], 00:30:56.002 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25035], 60.00th=[25560], 00:30:56.002 | 70.00th=[28443], 80.00th=[31065], 90.00th=[33162], 95.00th=[35390], 00:30:56.002 | 99.00th=[44303], 99.50th=[47449], 99.90th=[49021], 99.95th=[49546], 00:30:56.002 | 99.99th=[49546] 00:30:56.002 bw ( KiB/s): min= 1976, max= 2576, per=4.15%, avg=2382.00, stdev=145.37, samples=20 00:30:56.002 iops : min= 494, max= 644, avg=595.50, stdev=36.34, samples=20 00:30:56.002 lat (msec) : 20=3.90%, 50=96.10% 00:30:56.002 cpu : usr=98.67%, sys=0.85%, ctx=18, majf=0, minf=43 00:30:56.002 IO depths : 
1=0.2%, 2=0.7%, 4=7.0%, 8=78.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 complete : 0=0.0%, 4=90.1%, 8=5.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 issued rwts: total=5971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.002 filename1: (groupid=0, jobs=1): err= 0: pid=804244: Wed Jul 24 17:56:15 2024 00:30:56.002 read: IOPS=593, BW=2375KiB/s (2432kB/s)(23.2MiB/10009msec) 00:30:56.002 slat (usec): min=5, max=818, avg=29.48, stdev=23.78 00:30:56.002 clat (usec): min=9537, max=51710, avg=26793.20, stdev=5068.43 00:30:56.002 lat (usec): min=9551, max=51727, avg=26822.68, stdev=5067.09 00:30:56.002 clat percentiles (usec): 00:30:56.002 | 1.00th=[14877], 5.00th=[21103], 10.00th=[22676], 20.00th=[23462], 00:30:56.002 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25035], 60.00th=[25822], 00:30:56.002 | 70.00th=[29230], 80.00th=[31589], 90.00th=[33817], 95.00th=[35390], 00:30:56.002 | 99.00th=[40633], 99.50th=[43254], 99.90th=[49546], 99.95th=[51643], 00:30:56.002 | 99.99th=[51643] 00:30:56.002 bw ( KiB/s): min= 2176, max= 2504, per=4.12%, avg=2365.89, stdev=82.96, samples=19 00:30:56.002 iops : min= 544, max= 626, avg=591.47, stdev=20.74, samples=19 00:30:56.002 lat (msec) : 10=0.10%, 20=4.33%, 50=95.49%, 100=0.08% 00:30:56.002 cpu : usr=93.56%, sys=2.75%, ctx=199, majf=0, minf=37 00:30:56.002 IO depths : 1=0.1%, 2=0.4%, 4=7.4%, 8=77.5%, 16=14.6%, 32=0.0%, >=64=0.0% 00:30:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 complete : 0=0.0%, 4=90.3%, 8=5.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 issued rwts: total=5942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.002 filename1: (groupid=0, jobs=1): err= 0: pid=804245: Wed Jul 24 17:56:15 2024 00:30:56.002 read: IOPS=580, BW=2322KiB/s (2377kB/s)(22.7MiB/10010msec) 00:30:56.002 slat (nsec): min=5965, max=83521, avg=16089.75, stdev=12294.06 00:30:56.002 clat (usec): min=11423, max=65631, avg=27469.60, stdev=5854.51 00:30:56.002 lat (usec): min=11430, max=65647, avg=27485.69, stdev=5854.91 00:30:56.002 clat percentiles (usec): 00:30:56.002 | 1.00th=[15139], 5.00th=[17433], 10.00th=[21627], 20.00th=[23725], 00:30:56.002 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25822], 60.00th=[28967], 00:30:56.002 | 70.00th=[30802], 80.00th=[32375], 90.00th=[34341], 95.00th=[36963], 00:30:56.002 | 99.00th=[43254], 99.50th=[47449], 99.90th=[55313], 99.95th=[65274], 00:30:56.002 | 99.99th=[65799] 00:30:56.002 bw ( KiB/s): min= 2000, max= 2808, per=4.05%, avg=2324.42, stdev=183.00, samples=19 00:30:56.002 iops : min= 500, max= 702, avg=581.11, stdev=45.75, samples=19 00:30:56.002 lat (msec) : 20=8.45%, 50=91.27%, 100=0.28% 00:30:56.002 cpu : usr=98.78%, sys=0.76%, ctx=17, majf=0, minf=53 00:30:56.002 IO depths : 1=0.2%, 2=0.8%, 4=6.6%, 8=77.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:30:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 complete : 0=0.0%, 4=90.4%, 8=6.5%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 issued rwts: total=5810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.002 filename1: (groupid=0, jobs=1): err= 0: pid=804246: Wed Jul 24 17:56:15 2024 00:30:56.002 read: IOPS=605, BW=2420KiB/s (2478kB/s)(23.7MiB/10011msec) 00:30:56.002 slat (nsec): min=6789, 
max=71503, avg=22057.21, stdev=13668.83 00:30:56.002 clat (usec): min=9896, max=49394, avg=26308.89, stdev=4774.77 00:30:56.002 lat (usec): min=9904, max=49409, avg=26330.95, stdev=4774.36 00:30:56.002 clat percentiles (usec): 00:30:56.002 | 1.00th=[15401], 5.00th=[20317], 10.00th=[22676], 20.00th=[23462], 00:30:56.002 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:30:56.002 | 70.00th=[26346], 80.00th=[30802], 90.00th=[33162], 95.00th=[34341], 00:30:56.002 | 99.00th=[41157], 99.50th=[44827], 99.90th=[47973], 99.95th=[49021], 00:30:56.002 | 99.99th=[49546] 00:30:56.002 bw ( KiB/s): min= 2176, max= 2688, per=4.21%, avg=2416.40, stdev=135.75, samples=20 00:30:56.002 iops : min= 544, max= 672, avg=604.10, stdev=33.94, samples=20 00:30:56.002 lat (msec) : 10=0.07%, 20=4.71%, 50=95.23% 00:30:56.002 cpu : usr=98.71%, sys=0.88%, ctx=16, majf=0, minf=48 00:30:56.002 IO depths : 1=1.2%, 2=2.5%, 4=10.2%, 8=73.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:30:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.002 issued rwts: total=6057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.003 filename1: (groupid=0, jobs=1): err= 0: pid=804247: Wed Jul 24 17:56:15 2024 00:30:56.003 read: IOPS=550, BW=2203KiB/s (2255kB/s)(21.5MiB/10003msec) 00:30:56.003 slat (nsec): min=6736, max=71533, avg=15202.47, stdev=11891.50 00:30:56.003 clat (usec): min=4863, max=62265, avg=28979.47, stdev=6179.46 00:30:56.003 lat (usec): min=4877, max=62282, avg=28994.67, stdev=6180.17 00:30:56.003 clat percentiles (usec): 00:30:56.003 | 1.00th=[14353], 5.00th=[20579], 10.00th=[22938], 20.00th=[24249], 00:30:56.003 | 30.00th=[25297], 40.00th=[26346], 50.00th=[28705], 60.00th=[30802], 00:30:56.003 | 70.00th=[32113], 80.00th=[33424], 90.00th=[35390], 95.00th=[39060], 00:30:56.003 | 99.00th=[48497], 99.50th=[50594], 99.90th=[62129], 99.95th=[62129], 00:30:56.003 | 99.99th=[62129] 00:30:56.003 bw ( KiB/s): min= 1896, max= 2328, per=3.80%, avg=2183.16, stdev=115.84, samples=19 00:30:56.003 iops : min= 474, max= 582, avg=545.79, stdev=28.96, samples=19 00:30:56.003 lat (msec) : 10=0.47%, 20=3.69%, 50=95.33%, 100=0.51% 00:30:56.003 cpu : usr=98.89%, sys=0.71%, ctx=15, majf=0, minf=42 00:30:56.003 IO depths : 1=0.1%, 2=0.5%, 4=6.6%, 8=77.7%, 16=15.2%, 32=0.0%, >=64=0.0% 00:30:56.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 complete : 0=0.0%, 4=90.3%, 8=6.7%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 issued rwts: total=5508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.003 filename1: (groupid=0, jobs=1): err= 0: pid=804248: Wed Jul 24 17:56:15 2024 00:30:56.003 read: IOPS=593, BW=2374KiB/s (2431kB/s)(23.2MiB/10004msec) 00:30:56.003 slat (nsec): min=4731, max=70491, avg=21497.81, stdev=13373.98 00:30:56.003 clat (usec): min=5551, max=52029, avg=26844.89, stdev=5001.96 00:30:56.003 lat (usec): min=5558, max=52045, avg=26866.38, stdev=5001.92 00:30:56.003 clat percentiles (usec): 00:30:56.003 | 1.00th=[14222], 5.00th=[21103], 10.00th=[22938], 20.00th=[23725], 00:30:56.003 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:30:56.003 | 70.00th=[29754], 80.00th=[31589], 90.00th=[33424], 95.00th=[34866], 00:30:56.003 | 99.00th=[41157], 99.50th=[42730], 99.90th=[46924], 99.95th=[52167], 00:30:56.003 | 
99.99th=[52167] 00:30:56.003 bw ( KiB/s): min= 2080, max= 2608, per=4.11%, avg=2360.21, stdev=106.40, samples=19 00:30:56.003 iops : min= 520, max= 652, avg=590.05, stdev=26.60, samples=19 00:30:56.003 lat (msec) : 10=0.10%, 20=4.16%, 50=95.67%, 100=0.07% 00:30:56.003 cpu : usr=98.73%, sys=0.86%, ctx=19, majf=0, minf=75 00:30:56.003 IO depths : 1=0.3%, 2=0.8%, 4=8.5%, 8=77.2%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:56.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 issued rwts: total=5937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.003 filename1: (groupid=0, jobs=1): err= 0: pid=804249: Wed Jul 24 17:56:15 2024 00:30:56.003 read: IOPS=620, BW=2483KiB/s (2542kB/s)(24.3MiB/10011msec) 00:30:56.003 slat (nsec): min=6785, max=82726, avg=15312.42, stdev=9861.80 00:30:56.003 clat (usec): min=11160, max=49266, avg=25684.55, stdev=4465.97 00:30:56.003 lat (usec): min=11169, max=49275, avg=25699.86, stdev=4466.65 00:30:56.003 clat percentiles (usec): 00:30:56.003 | 1.00th=[15139], 5.00th=[19268], 10.00th=[22414], 20.00th=[23462], 00:30:56.003 | 30.00th=[23725], 40.00th=[24249], 50.00th=[24511], 60.00th=[25035], 00:30:56.003 | 70.00th=[25560], 80.00th=[29230], 90.00th=[32637], 95.00th=[34341], 00:30:56.003 | 99.00th=[40109], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:30:56.003 | 99.99th=[49021] 00:30:56.003 bw ( KiB/s): min= 2224, max= 2672, per=4.32%, avg=2479.20, stdev=119.59, samples=20 00:30:56.003 iops : min= 556, max= 668, avg=619.80, stdev=29.90, samples=20 00:30:56.003 lat (msec) : 20=5.62%, 50=94.38% 00:30:56.003 cpu : usr=98.57%, sys=0.98%, ctx=16, majf=0, minf=54 00:30:56.003 IO depths : 1=0.7%, 2=1.5%, 4=8.0%, 8=76.9%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:56.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 issued rwts: total=6214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.003 filename1: (groupid=0, jobs=1): err= 0: pid=804250: Wed Jul 24 17:56:15 2024 00:30:56.003 read: IOPS=575, BW=2304KiB/s (2359kB/s)(22.5MiB/10016msec) 00:30:56.003 slat (nsec): min=6783, max=81294, avg=19398.85, stdev=12647.76 00:30:56.003 clat (usec): min=10734, max=50225, avg=27663.78, stdev=5501.73 00:30:56.003 lat (usec): min=10747, max=50240, avg=27683.18, stdev=5500.81 00:30:56.003 clat percentiles (usec): 00:30:56.003 | 1.00th=[16057], 5.00th=[21365], 10.00th=[22938], 20.00th=[23725], 00:30:56.003 | 30.00th=[24249], 40.00th=[25035], 50.00th=[25560], 60.00th=[27919], 00:30:56.003 | 70.00th=[30540], 80.00th=[32375], 90.00th=[34341], 95.00th=[36439], 00:30:56.003 | 99.00th=[47449], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:30:56.003 | 99.99th=[50070] 00:30:56.003 bw ( KiB/s): min= 2016, max= 2560, per=4.01%, avg=2301.20, stdev=155.87, samples=20 00:30:56.003 iops : min= 504, max= 640, avg=575.30, stdev=38.97, samples=20 00:30:56.003 lat (msec) : 20=3.59%, 50=96.27%, 100=0.14% 00:30:56.003 cpu : usr=98.63%, sys=0.88%, ctx=17, majf=0, minf=53 00:30:56.003 IO depths : 1=0.2%, 2=0.6%, 4=7.5%, 8=77.8%, 16=13.9%, 32=0.0%, >=64=0.0% 00:30:56.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 
issued rwts: total=5769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.003 filename2: (groupid=0, jobs=1): err= 0: pid=804251: Wed Jul 24 17:56:15 2024 00:30:56.003 read: IOPS=613, BW=2453KiB/s (2512kB/s)(24.0MiB/10016msec) 00:30:56.003 slat (nsec): min=6784, max=73118, avg=19262.98, stdev=12699.33 00:30:56.003 clat (usec): min=10906, max=48988, avg=25969.56, stdev=4358.54 00:30:56.003 lat (usec): min=10914, max=48996, avg=25988.82, stdev=4358.40 00:30:56.003 clat percentiles (usec): 00:30:56.003 | 1.00th=[15533], 5.00th=[21365], 10.00th=[22938], 20.00th=[23462], 00:30:56.003 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:30:56.003 | 70.00th=[25822], 80.00th=[28967], 90.00th=[32375], 95.00th=[34341], 00:30:56.003 | 99.00th=[40109], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:30:56.003 | 99.99th=[49021] 00:30:56.003 bw ( KiB/s): min= 2224, max= 2648, per=4.27%, avg=2450.80, stdev=106.20, samples=20 00:30:56.003 iops : min= 556, max= 662, avg=612.70, stdev=26.55, samples=20 00:30:56.003 lat (msec) : 20=4.23%, 50=95.77% 00:30:56.003 cpu : usr=98.56%, sys=0.98%, ctx=18, majf=0, minf=48 00:30:56.003 IO depths : 1=0.4%, 2=1.3%, 4=9.5%, 8=75.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:30:56.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 complete : 0=0.0%, 4=90.9%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 issued rwts: total=6143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.003 filename2: (groupid=0, jobs=1): err= 0: pid=804252: Wed Jul 24 17:56:15 2024 00:30:56.003 read: IOPS=577, BW=2310KiB/s (2366kB/s)(22.6MiB/10002msec) 00:30:56.003 slat (nsec): min=5048, max=79709, avg=22369.58, stdev=14026.95 00:30:56.003 clat (usec): min=5017, max=62105, avg=27582.58, stdev=5905.50 00:30:56.003 lat (usec): min=5029, max=62119, avg=27604.95, stdev=5904.39 00:30:56.003 clat percentiles (usec): 00:30:56.003 | 1.00th=[13829], 5.00th=[21103], 10.00th=[22938], 20.00th=[23987], 00:30:56.003 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25560], 60.00th=[27657], 00:30:56.003 | 70.00th=[30802], 80.00th=[32375], 90.00th=[34341], 95.00th=[36963], 00:30:56.003 | 99.00th=[46924], 99.50th=[49546], 99.90th=[56361], 99.95th=[56361], 00:30:56.003 | 99.99th=[62129] 00:30:56.003 bw ( KiB/s): min= 1992, max= 2480, per=3.99%, avg=2292.63, stdev=114.49, samples=19 00:30:56.003 iops : min= 498, max= 620, avg=573.16, stdev=28.62, samples=19 00:30:56.003 lat (msec) : 10=0.52%, 20=3.93%, 50=95.12%, 100=0.43% 00:30:56.003 cpu : usr=98.73%, sys=0.86%, ctx=18, majf=0, minf=57 00:30:56.003 IO depths : 1=0.2%, 2=0.6%, 4=8.3%, 8=77.4%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:56.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 issued rwts: total=5777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.003 filename2: (groupid=0, jobs=1): err= 0: pid=804253: Wed Jul 24 17:56:15 2024 00:30:56.003 read: IOPS=605, BW=2422KiB/s (2480kB/s)(23.7MiB/10016msec) 00:30:56.003 slat (nsec): min=6720, max=87017, avg=17189.70, stdev=11494.60 00:30:56.003 clat (usec): min=11592, max=51255, avg=26316.25, stdev=4961.40 00:30:56.003 lat (usec): min=11600, max=51272, avg=26333.44, stdev=4962.33 00:30:56.003 clat percentiles (usec): 00:30:56.003 | 
1.00th=[15664], 5.00th=[18744], 10.00th=[22414], 20.00th=[23200], 00:30:56.003 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:30:56.003 | 70.00th=[26346], 80.00th=[30802], 90.00th=[33424], 95.00th=[35914], 00:30:56.003 | 99.00th=[39584], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:30:56.003 | 99.99th=[51119] 00:30:56.003 bw ( KiB/s): min= 2272, max= 2640, per=4.21%, avg=2419.60, stdev=92.73, samples=20 00:30:56.003 iops : min= 568, max= 660, avg=604.90, stdev=23.18, samples=20 00:30:56.003 lat (msec) : 20=5.67%, 50=94.28%, 100=0.05% 00:30:56.003 cpu : usr=98.67%, sys=0.91%, ctx=19, majf=0, minf=52 00:30:56.003 IO depths : 1=0.6%, 2=1.3%, 4=8.5%, 8=76.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:30:56.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 complete : 0=0.0%, 4=90.3%, 8=5.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.003 issued rwts: total=6065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.003 filename2: (groupid=0, jobs=1): err= 0: pid=804254: Wed Jul 24 17:56:15 2024 00:30:56.003 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10001msec) 00:30:56.003 slat (nsec): min=6116, max=70244, avg=21222.98, stdev=13400.24 00:30:56.003 clat (usec): min=5398, max=61676, avg=28200.26, stdev=5828.30 00:30:56.003 lat (usec): min=5411, max=61693, avg=28221.48, stdev=5827.30 00:30:56.003 clat percentiles (usec): 00:30:56.003 | 1.00th=[14222], 5.00th=[21890], 10.00th=[22938], 20.00th=[23987], 00:30:56.003 | 30.00th=[24511], 40.00th=[25297], 50.00th=[26346], 60.00th=[29754], 00:30:56.004 | 70.00th=[31327], 80.00th=[32900], 90.00th=[34866], 95.00th=[36963], 00:30:56.004 | 99.00th=[47973], 99.50th=[48497], 99.90th=[55837], 99.95th=[61604], 00:30:56.004 | 99.99th=[61604] 00:30:56.004 bw ( KiB/s): min= 1920, max= 2408, per=3.91%, avg=2247.58, stdev=105.91, samples=19 00:30:56.004 iops : min= 480, max= 602, avg=561.89, stdev=26.48, samples=19 00:30:56.004 lat (msec) : 10=0.34%, 20=3.68%, 50=95.67%, 100=0.32% 00:30:56.004 cpu : usr=98.73%, sys=0.87%, ctx=16, majf=0, minf=38 00:30:56.004 IO depths : 1=0.1%, 2=0.5%, 4=8.0%, 8=77.7%, 16=13.7%, 32=0.0%, >=64=0.0% 00:30:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.004 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.004 issued rwts: total=5652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.004 filename2: (groupid=0, jobs=1): err= 0: pid=804255: Wed Jul 24 17:56:15 2024 00:30:56.004 read: IOPS=583, BW=2335KiB/s (2391kB/s)(22.8MiB/10005msec) 00:30:56.004 slat (nsec): min=4192, max=70902, avg=21160.24, stdev=13397.09 00:30:56.004 clat (usec): min=8792, max=60409, avg=27288.91, stdev=5552.45 00:30:56.004 lat (usec): min=8798, max=60422, avg=27310.07, stdev=5551.82 00:30:56.004 clat percentiles (usec): 00:30:56.004 | 1.00th=[13829], 5.00th=[19530], 10.00th=[22938], 20.00th=[23725], 00:30:56.004 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25560], 60.00th=[26870], 00:30:56.004 | 70.00th=[30540], 80.00th=[32113], 90.00th=[33817], 95.00th=[35914], 00:30:56.004 | 99.00th=[41681], 99.50th=[47449], 99.90th=[60556], 99.95th=[60556], 00:30:56.004 | 99.99th=[60556] 00:30:56.004 bw ( KiB/s): min= 2128, max= 2456, per=4.05%, avg=2325.89, stdev=100.73, samples=19 00:30:56.004 iops : min= 532, max= 614, avg=581.47, stdev=25.18, samples=19 00:30:56.004 lat (msec) : 10=0.22%, 20=5.07%, 50=94.44%, 
100=0.27% 00:30:56.004 cpu : usr=98.77%, sys=0.83%, ctx=19, majf=0, minf=53 00:30:56.004 IO depths : 1=0.2%, 2=0.7%, 4=8.2%, 8=77.5%, 16=13.4%, 32=0.0%, >=64=0.0% 00:30:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.004 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.004 issued rwts: total=5841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.004 filename2: (groupid=0, jobs=1): err= 0: pid=804256: Wed Jul 24 17:56:15 2024 00:30:56.004 read: IOPS=589, BW=2359KiB/s (2415kB/s)(23.0MiB/10006msec) 00:30:56.004 slat (nsec): min=6046, max=70581, avg=15591.61, stdev=11999.19 00:30:56.004 clat (usec): min=10867, max=55119, avg=27045.08, stdev=5275.75 00:30:56.004 lat (usec): min=10876, max=55135, avg=27060.67, stdev=5276.01 00:30:56.004 clat percentiles (usec): 00:30:56.004 | 1.00th=[15139], 5.00th=[19792], 10.00th=[22414], 20.00th=[23725], 00:30:56.004 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[26608], 00:30:56.004 | 70.00th=[29230], 80.00th=[31851], 90.00th=[33817], 95.00th=[35914], 00:30:56.004 | 99.00th=[43254], 99.50th=[44827], 99.90th=[50594], 99.95th=[54789], 00:30:56.004 | 99.99th=[55313] 00:30:56.004 bw ( KiB/s): min= 2128, max= 2608, per=4.10%, avg=2356.21, stdev=131.79, samples=19 00:30:56.004 iops : min= 532, max= 652, avg=589.05, stdev=32.95, samples=19 00:30:56.004 lat (msec) : 20=5.24%, 50=94.49%, 100=0.27% 00:30:56.004 cpu : usr=98.58%, sys=0.97%, ctx=16, majf=0, minf=49 00:30:56.004 IO depths : 1=0.1%, 2=0.5%, 4=6.4%, 8=78.2%, 16=14.8%, 32=0.0%, >=64=0.0% 00:30:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.004 complete : 0=0.0%, 4=90.1%, 8=6.4%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.004 issued rwts: total=5900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.004 filename2: (groupid=0, jobs=1): err= 0: pid=804257: Wed Jul 24 17:56:15 2024 00:30:56.004 read: IOPS=600, BW=2401KiB/s (2458kB/s)(23.5MiB/10026msec) 00:30:56.004 slat (nsec): min=3824, max=74353, avg=21117.86, stdev=13473.56 00:30:56.004 clat (usec): min=3945, max=51783, avg=26537.61, stdev=5469.41 00:30:56.004 lat (usec): min=3953, max=51791, avg=26558.73, stdev=5469.72 00:30:56.004 clat percentiles (usec): 00:30:56.004 | 1.00th=[10814], 5.00th=[18744], 10.00th=[22676], 20.00th=[23462], 00:30:56.004 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25035], 60.00th=[25560], 00:30:56.004 | 70.00th=[28443], 80.00th=[31589], 90.00th=[33424], 95.00th=[35390], 00:30:56.004 | 99.00th=[41157], 99.50th=[45876], 99.90th=[49021], 99.95th=[50070], 00:30:56.004 | 99.99th=[51643] 00:30:56.004 bw ( KiB/s): min= 2224, max= 2688, per=4.18%, avg=2400.40, stdev=99.53, samples=20 00:30:56.004 iops : min= 556, max= 672, avg=600.10, stdev=24.88, samples=20 00:30:56.004 lat (msec) : 4=0.12%, 10=0.85%, 20=4.77%, 50=94.22%, 100=0.05% 00:30:56.004 cpu : usr=98.66%, sys=0.92%, ctx=21, majf=0, minf=48 00:30:56.004 IO depths : 1=0.4%, 2=0.9%, 4=7.5%, 8=78.0%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.004 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.004 issued rwts: total=6017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.004 filename2: (groupid=0, jobs=1): err= 0: pid=804258: Wed Jul 24 17:56:15 
2024 00:30:56.004 read: IOPS=567, BW=2271KiB/s (2326kB/s)(22.2MiB/10024msec) 00:30:56.004 slat (nsec): min=4160, max=72335, avg=20107.82, stdev=13531.85 00:30:56.004 clat (usec): min=5825, max=49820, avg=28053.45, stdev=5567.24 00:30:56.004 lat (usec): min=5832, max=49835, avg=28073.56, stdev=5566.92 00:30:56.004 clat percentiles (usec): 00:30:56.004 | 1.00th=[13829], 5.00th=[21103], 10.00th=[22938], 20.00th=[23987], 00:30:56.004 | 30.00th=[24511], 40.00th=[25297], 50.00th=[26346], 60.00th=[29230], 00:30:56.004 | 70.00th=[31327], 80.00th=[32900], 90.00th=[34866], 95.00th=[36439], 00:30:56.004 | 99.00th=[43779], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:30:56.004 | 99.99th=[50070] 00:30:56.004 bw ( KiB/s): min= 2128, max= 2565, per=3.95%, avg=2270.65, stdev=116.18, samples=20 00:30:56.004 iops : min= 532, max= 641, avg=567.65, stdev=29.01, samples=20 00:30:56.004 lat (msec) : 10=0.54%, 20=3.36%, 50=96.10% 00:30:56.004 cpu : usr=98.73%, sys=0.83%, ctx=20, majf=0, minf=38 00:30:56.004 IO depths : 1=0.2%, 2=0.8%, 4=7.1%, 8=77.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:30:56.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.004 complete : 0=0.0%, 4=90.2%, 8=5.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.004 issued rwts: total=5692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:56.004 00:30:56.004 Run status group 0 (all jobs): 00:30:56.004 READ: bw=56.1MiB/s (58.8MB/s), 2203KiB/s-3124KiB/s (2255kB/s-3199kB/s), io=562MiB (590MB), run=10001-10026msec 00:30:56.004 17:56:15 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:56.004 17:56:15 -- target/dif.sh@43 -- # local sub 00:30:56.004 17:56:15 -- target/dif.sh@45 -- # for sub in "$@" 00:30:56.004 17:56:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:56.004 17:56:15 -- target/dif.sh@36 -- # local sub_id=0 00:30:56.004 17:56:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:56.004 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.004 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.004 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.004 17:56:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:56.004 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.004 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.004 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.004 17:56:15 -- target/dif.sh@45 -- # for sub in "$@" 00:30:56.004 17:56:15 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:56.004 17:56:15 -- target/dif.sh@36 -- # local sub_id=1 00:30:56.004 17:56:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:56.004 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.004 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.004 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.004 17:56:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:56.004 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.004 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.004 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.004 17:56:15 -- target/dif.sh@45 -- # for sub in "$@" 00:30:56.004 17:56:15 -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:56.004 17:56:15 -- target/dif.sh@36 -- # local sub_id=2 00:30:56.004 17:56:15 -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:56.004 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.004 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.004 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.004 17:56:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:56.004 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.004 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.004 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.004 17:56:15 -- target/dif.sh@115 -- # NULL_DIF=1 00:30:56.004 17:56:15 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:56.004 17:56:15 -- target/dif.sh@115 -- # numjobs=2 00:30:56.004 17:56:15 -- target/dif.sh@115 -- # iodepth=8 00:30:56.004 17:56:15 -- target/dif.sh@115 -- # runtime=5 00:30:56.004 17:56:15 -- target/dif.sh@115 -- # files=1 00:30:56.004 17:56:15 -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:56.004 17:56:15 -- target/dif.sh@28 -- # local sub 00:30:56.004 17:56:15 -- target/dif.sh@30 -- # for sub in "$@" 00:30:56.004 17:56:15 -- target/dif.sh@31 -- # create_subsystem 0 00:30:56.004 17:56:15 -- target/dif.sh@18 -- # local sub_id=0 00:30:56.004 17:56:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:56.004 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.004 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.004 bdev_null0 00:30:56.004 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.004 17:56:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:56.004 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.004 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.004 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.004 17:56:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:56.004 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.004 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.004 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.005 17:56:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:56.005 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.005 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.005 [2024-07-24 17:56:15.676750] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.005 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.005 17:56:15 -- target/dif.sh@30 -- # for sub in "$@" 00:30:56.005 17:56:15 -- target/dif.sh@31 -- # create_subsystem 1 00:30:56.005 17:56:15 -- target/dif.sh@18 -- # local sub_id=1 00:30:56.005 17:56:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:56.005 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.005 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.005 bdev_null1 00:30:56.005 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.005 17:56:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:56.005 17:56:15 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:30:56.005 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.005 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.005 17:56:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:56.005 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.005 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.005 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.005 17:56:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.005 17:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.005 17:56:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.005 17:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.005 17:56:15 -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:56.005 17:56:15 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:56.005 17:56:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:56.005 17:56:15 -- nvmf/common.sh@520 -- # config=() 00:30:56.005 17:56:15 -- nvmf/common.sh@520 -- # local subsystem config 00:30:56.005 17:56:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:56.005 17:56:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:56.005 { 00:30:56.005 "params": { 00:30:56.005 "name": "Nvme$subsystem", 00:30:56.005 "trtype": "$TEST_TRANSPORT", 00:30:56.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.005 "adrfam": "ipv4", 00:30:56.005 "trsvcid": "$NVMF_PORT", 00:30:56.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.005 "hdgst": ${hdgst:-false}, 00:30:56.005 "ddgst": ${ddgst:-false} 00:30:56.005 }, 00:30:56.005 "method": "bdev_nvme_attach_controller" 00:30:56.005 } 00:30:56.005 EOF 00:30:56.005 )") 00:30:56.005 17:56:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.005 17:56:15 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.005 17:56:15 -- target/dif.sh@82 -- # gen_fio_conf 00:30:56.005 17:56:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:56.005 17:56:15 -- target/dif.sh@54 -- # local file 00:30:56.005 17:56:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:56.005 17:56:15 -- target/dif.sh@56 -- # cat 00:30:56.005 17:56:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:56.005 17:56:15 -- nvmf/common.sh@542 -- # cat 00:30:56.005 17:56:15 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.005 17:56:15 -- common/autotest_common.sh@1320 -- # shift 00:30:56.005 17:56:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:56.005 17:56:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.005 17:56:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:56.005 17:56:15 -- target/dif.sh@72 -- # (( file <= files )) 00:30:56.005 17:56:15 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.005 17:56:15 -- target/dif.sh@73 -- # cat 00:30:56.005 17:56:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:56.005 17:56:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:56.005 
17:56:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:56.005 17:56:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:56.005 { 00:30:56.005 "params": { 00:30:56.005 "name": "Nvme$subsystem", 00:30:56.005 "trtype": "$TEST_TRANSPORT", 00:30:56.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.005 "adrfam": "ipv4", 00:30:56.005 "trsvcid": "$NVMF_PORT", 00:30:56.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.005 "hdgst": ${hdgst:-false}, 00:30:56.005 "ddgst": ${ddgst:-false} 00:30:56.005 }, 00:30:56.005 "method": "bdev_nvme_attach_controller" 00:30:56.005 } 00:30:56.005 EOF 00:30:56.005 )") 00:30:56.005 17:56:15 -- target/dif.sh@72 -- # (( file++ )) 00:30:56.005 17:56:15 -- target/dif.sh@72 -- # (( file <= files )) 00:30:56.005 17:56:15 -- nvmf/common.sh@542 -- # cat 00:30:56.005 17:56:15 -- nvmf/common.sh@544 -- # jq . 00:30:56.005 17:56:15 -- nvmf/common.sh@545 -- # IFS=, 00:30:56.005 17:56:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:56.005 "params": { 00:30:56.005 "name": "Nvme0", 00:30:56.005 "trtype": "tcp", 00:30:56.005 "traddr": "10.0.0.2", 00:30:56.005 "adrfam": "ipv4", 00:30:56.005 "trsvcid": "4420", 00:30:56.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:56.005 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:56.005 "hdgst": false, 00:30:56.005 "ddgst": false 00:30:56.005 }, 00:30:56.005 "method": "bdev_nvme_attach_controller" 00:30:56.005 },{ 00:30:56.005 "params": { 00:30:56.005 "name": "Nvme1", 00:30:56.005 "trtype": "tcp", 00:30:56.005 "traddr": "10.0.0.2", 00:30:56.005 "adrfam": "ipv4", 00:30:56.005 "trsvcid": "4420", 00:30:56.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:56.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:56.005 "hdgst": false, 00:30:56.005 "ddgst": false 00:30:56.005 }, 00:30:56.005 "method": "bdev_nvme_attach_controller" 00:30:56.005 }' 00:30:56.005 17:56:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:56.005 17:56:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:56.005 17:56:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.005 17:56:15 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.005 17:56:15 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:56.005 17:56:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:56.005 17:56:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:56.005 17:56:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:56.005 17:56:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:56.005 17:56:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.005 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:56.005 ... 00:30:56.005 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:56.005 ... 00:30:56.005 fio-3.35 00:30:56.005 Starting 4 threads 00:30:56.005 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.005 [2024-07-24 17:56:16.565876] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
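Annotation: the xtrace just above shows gen_nvmf_target_json printing only the two per-controller "params" objects (Nvme0 and Nvme1) and piping them through jq; fio_bdev then reads the assembled document over /dev/fd/62. A minimal hand-written sketch of that document, assuming SPDK's usual "subsystems"/"config" envelope around the printed params (the temp-file path is illustrative only; the harness streams the config over a file descriptor instead of writing a file):

cat <<'JSON' > /tmp/nvmf_target.json  # stand-in for the /dev/fd/62 config
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON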
00:30:56.005 [2024-07-24 17:56:16.565930] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:00.196 00:31:00.196 filename0: (groupid=0, jobs=1): err= 0: pid=806092: Wed Jul 24 17:56:21 2024 00:31:00.196 read: IOPS=2848, BW=22.2MiB/s (23.3MB/s)(111MiB/5002msec) 00:31:00.196 slat (nsec): min=2834, max=55436, avg=8336.29, stdev=2452.04 00:31:00.196 clat (usec): min=1378, max=5456, avg=2787.61, stdev=442.68 00:31:00.196 lat (usec): min=1387, max=5466, avg=2795.95, stdev=442.61 00:31:00.196 clat percentiles (usec): 00:31:00.196 | 1.00th=[ 1844], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2409], 00:31:00.196 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:31:00.196 | 70.00th=[ 2999], 80.00th=[ 3130], 90.00th=[ 3359], 95.00th=[ 3523], 00:31:00.196 | 99.00th=[ 3949], 99.50th=[ 4080], 99.90th=[ 4686], 99.95th=[ 5145], 00:31:00.196 | 99.99th=[ 5407] 00:31:00.196 bw ( KiB/s): min=22416, max=23072, per=26.98%, avg=22784.00, stdev=218.73, samples=10 00:31:00.196 iops : min= 2802, max= 2884, avg=2848.00, stdev=27.34, samples=10 00:31:00.196 lat (msec) : 2=2.72%, 4=96.50%, 10=0.77% 00:31:00.196 cpu : usr=95.72%, sys=3.94%, ctx=9, majf=0, minf=0 00:31:00.196 IO depths : 1=0.1%, 2=1.2%, 4=66.2%, 8=32.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.196 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.196 issued rwts: total=14246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.196 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.196 filename0: (groupid=0, jobs=1): err= 0: pid=806093: Wed Jul 24 17:56:21 2024 00:31:00.196 read: IOPS=2775, BW=21.7MiB/s (22.7MB/s)(108MiB/5003msec) 00:31:00.196 slat (nsec): min=4106, max=24660, avg=8487.80, stdev=2556.10 00:31:00.196 clat (usec): min=1436, max=12915, avg=2860.63, stdev=504.54 00:31:00.196 lat (usec): min=1443, max=12928, avg=2869.12, stdev=504.47 00:31:00.196 clat percentiles (usec): 00:31:00.196 | 1.00th=[ 1926], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2474], 00:31:00.196 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 2966], 00:31:00.196 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3425], 95.00th=[ 3621], 00:31:00.196 | 99.00th=[ 4015], 99.50th=[ 4228], 99.90th=[ 4817], 99.95th=[12649], 00:31:00.196 | 99.99th=[12911] 00:31:00.196 bw ( KiB/s): min=21744, max=22688, per=26.30%, avg=22206.40, stdev=325.42, samples=10 00:31:00.196 iops : min= 2718, max= 2836, avg=2775.80, stdev=40.68, samples=10 00:31:00.196 lat (msec) : 2=1.92%, 4=96.99%, 10=1.04%, 20=0.06% 00:31:00.196 cpu : usr=96.42%, sys=3.24%, ctx=6, majf=0, minf=0 00:31:00.196 IO depths : 1=0.1%, 2=1.0%, 4=66.4%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.196 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.196 issued rwts: total=13885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.196 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.196 filename1: (groupid=0, jobs=1): err= 0: pid=806094: Wed Jul 24 17:56:21 2024 00:31:00.196 read: IOPS=2152, BW=16.8MiB/s (17.6MB/s)(84.1MiB/5004msec) 00:31:00.196 slat (usec): min=6, max=153, avg= 8.52, stdev= 3.00 00:31:00.196 clat (usec): min=1555, max=13049, avg=3693.52, stdev=749.54 00:31:00.196 lat (usec): min=1562, max=13073, avg=3702.05, stdev=749.55 00:31:00.196 clat percentiles (usec): 00:31:00.196 | 1.00th=[ 2278], 
5.00th=[ 2671], 10.00th=[ 2900], 20.00th=[ 3130], 00:31:00.196 | 30.00th=[ 3294], 40.00th=[ 3458], 50.00th=[ 3621], 60.00th=[ 3752], 00:31:00.196 | 70.00th=[ 3982], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 4883], 00:31:00.196 | 99.00th=[ 5669], 99.50th=[ 6194], 99.90th=[ 8586], 99.95th=[12780], 00:31:00.196 | 99.99th=[13042] 00:31:00.196 bw ( KiB/s): min=16528, max=17808, per=20.40%, avg=17222.40, stdev=399.55, samples=10 00:31:00.196 iops : min= 2066, max= 2226, avg=2152.80, stdev=49.94, samples=10 00:31:00.196 lat (msec) : 2=0.17%, 4=71.38%, 10=28.37%, 20=0.08% 00:31:00.196 cpu : usr=96.46%, sys=3.18%, ctx=7, majf=0, minf=0 00:31:00.196 IO depths : 1=0.1%, 2=1.8%, 4=66.7%, 8=31.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.196 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.196 issued rwts: total=10769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.196 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.196 filename1: (groupid=0, jobs=1): err= 0: pid=806095: Wed Jul 24 17:56:21 2024 00:31:00.196 read: IOPS=2782, BW=21.7MiB/s (22.8MB/s)(109MiB/5002msec) 00:31:00.196 slat (nsec): min=6104, max=35893, avg=8540.31, stdev=2653.16 00:31:00.196 clat (usec): min=1459, max=50910, avg=2853.36, stdev=1229.67 00:31:00.196 lat (usec): min=1465, max=50934, avg=2861.90, stdev=1229.72 00:31:00.196 clat percentiles (usec): 00:31:00.196 | 1.00th=[ 1909], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2442], 00:31:00.196 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 2933], 00:31:00.196 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3359], 95.00th=[ 3556], 00:31:00.196 | 99.00th=[ 3916], 99.50th=[ 4047], 99.90th=[ 4948], 99.95th=[50594], 00:31:00.196 | 99.99th=[51119] 00:31:00.196 bw ( KiB/s): min=19872, max=22848, per=26.36%, avg=22259.20, stdev=858.51, samples=10 00:31:00.196 iops : min= 2484, max= 2856, avg=2782.40, stdev=107.31, samples=10 00:31:00.196 lat (msec) : 2=2.12%, 4=97.24%, 10=0.58%, 100=0.06% 00:31:00.196 cpu : usr=95.54%, sys=4.08%, ctx=7, majf=0, minf=9 00:31:00.196 IO depths : 1=0.1%, 2=1.2%, 4=66.4%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.196 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.196 issued rwts: total=13917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.196 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.196 00:31:00.196 Run status group 0 (all jobs): 00:31:00.196 READ: bw=82.5MiB/s (86.5MB/s), 16.8MiB/s-22.2MiB/s (17.6MB/s-23.3MB/s), io=413MiB (433MB), run=5002-5004msec 00:31:00.456 17:56:21 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:00.456 17:56:21 -- target/dif.sh@43 -- # local sub 00:31:00.456 17:56:21 -- target/dif.sh@45 -- # for sub in "$@" 00:31:00.456 17:56:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:00.456 17:56:21 -- target/dif.sh@36 -- # local sub_id=0 00:31:00.456 17:56:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:00.456 17:56:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.456 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:31:00.456 17:56:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.456 17:56:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:00.456 17:56:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.456 17:56:21 -- 
common/autotest_common.sh@10 -- # set +x 00:31:00.456 17:56:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.456 17:56:21 -- target/dif.sh@45 -- # for sub in "$@" 00:31:00.456 17:56:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:00.456 17:56:21 -- target/dif.sh@36 -- # local sub_id=1 00:31:00.456 17:56:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:00.456 17:56:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.456 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:31:00.456 17:56:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.456 17:56:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:00.456 17:56:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.456 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:31:00.456 17:56:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.456 00:31:00.456 real 0m24.183s 00:31:00.456 user 4m50.025s 00:31:00.456 sys 0m4.910s 00:31:00.456 17:56:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.456 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:31:00.456 ************************************ 00:31:00.456 END TEST fio_dif_rand_params 00:31:00.456 ************************************ 00:31:00.456 17:56:21 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:00.456 17:56:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:00.456 17:56:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:00.456 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:31:00.456 ************************************ 00:31:00.456 START TEST fio_dif_digest 00:31:00.456 ************************************ 00:31:00.456 17:56:21 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:31:00.456 17:56:21 -- target/dif.sh@123 -- # local NULL_DIF 00:31:00.456 17:56:21 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:00.456 17:56:21 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:00.456 17:56:21 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:00.456 17:56:21 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:00.456 17:56:21 -- target/dif.sh@127 -- # numjobs=3 00:31:00.456 17:56:21 -- target/dif.sh@127 -- # iodepth=3 00:31:00.456 17:56:21 -- target/dif.sh@127 -- # runtime=10 00:31:00.456 17:56:21 -- target/dif.sh@128 -- # hdgst=true 00:31:00.456 17:56:21 -- target/dif.sh@128 -- # ddgst=true 00:31:00.456 17:56:21 -- target/dif.sh@130 -- # create_subsystems 0 00:31:00.456 17:56:21 -- target/dif.sh@28 -- # local sub 00:31:00.456 17:56:21 -- target/dif.sh@30 -- # for sub in "$@" 00:31:00.456 17:56:21 -- target/dif.sh@31 -- # create_subsystem 0 00:31:00.456 17:56:21 -- target/dif.sh@18 -- # local sub_id=0 00:31:00.456 17:56:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:00.456 17:56:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.456 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:31:00.456 bdev_null0 00:31:00.456 17:56:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.456 17:56:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:00.456 17:56:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.456 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:31:00.456 17:56:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.456 17:56:21 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:00.456 17:56:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.456 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:31:00.456 17:56:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.456 17:56:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:00.456 17:56:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.456 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:31:00.456 [2024-07-24 17:56:21.986500] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.456 17:56:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.456 17:56:21 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:00.456 17:56:21 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:00.456 17:56:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:00.456 17:56:21 -- nvmf/common.sh@520 -- # config=() 00:31:00.456 17:56:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.456 17:56:21 -- nvmf/common.sh@520 -- # local subsystem config 00:31:00.456 17:56:21 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.456 17:56:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:00.456 17:56:21 -- target/dif.sh@82 -- # gen_fio_conf 00:31:00.456 17:56:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:00.456 { 00:31:00.456 "params": { 00:31:00.456 "name": "Nvme$subsystem", 00:31:00.456 "trtype": "$TEST_TRANSPORT", 00:31:00.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.456 "adrfam": "ipv4", 00:31:00.456 "trsvcid": "$NVMF_PORT", 00:31:00.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.456 "hdgst": ${hdgst:-false}, 00:31:00.456 "ddgst": ${ddgst:-false} 00:31:00.456 }, 00:31:00.456 "method": "bdev_nvme_attach_controller" 00:31:00.456 } 00:31:00.456 EOF 00:31:00.456 )") 00:31:00.456 17:56:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:00.456 17:56:21 -- target/dif.sh@54 -- # local file 00:31:00.456 17:56:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:00.456 17:56:21 -- target/dif.sh@56 -- # cat 00:31:00.456 17:56:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:00.456 17:56:21 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.456 17:56:21 -- common/autotest_common.sh@1320 -- # shift 00:31:00.456 17:56:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:00.456 17:56:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.456 17:56:21 -- nvmf/common.sh@542 -- # cat 00:31:00.456 17:56:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:00.456 17:56:21 -- target/dif.sh@72 -- # (( file <= files )) 00:31:00.456 17:56:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.456 17:56:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:00.456 17:56:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:00.456 17:56:21 -- nvmf/common.sh@544 -- # jq . 
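Annotation: the rpc_cmd calls traced in this stretch build the fio_dif_digest target: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exported through an NVMe-oF subsystem listening on TCP 10.0.0.2:4420. In the SPDK test harness rpc_cmd forwards to scripts/rpc.py, so a hedged stand-alone equivalent (assuming the nvmf target app is already running and the TCP transport was created earlier in the run) is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# null backing bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# subsystem with any-host access, one namespace, and a TCP listener
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420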
00:31:00.456 17:56:22 -- nvmf/common.sh@545 -- # IFS=, 00:31:00.456 17:56:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:00.456 "params": { 00:31:00.456 "name": "Nvme0", 00:31:00.456 "trtype": "tcp", 00:31:00.456 "traddr": "10.0.0.2", 00:31:00.456 "adrfam": "ipv4", 00:31:00.456 "trsvcid": "4420", 00:31:00.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.456 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:00.456 "hdgst": true, 00:31:00.456 "ddgst": true 00:31:00.456 }, 00:31:00.456 "method": "bdev_nvme_attach_controller" 00:31:00.456 }' 00:31:00.456 17:56:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:00.456 17:56:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:00.456 17:56:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.456 17:56:22 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.456 17:56:22 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:00.456 17:56:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:00.456 17:56:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:00.456 17:56:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:00.456 17:56:22 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:00.456 17:56:22 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.023 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:01.023 ... 00:31:01.023 fio-3.35 00:31:01.023 Starting 3 threads 00:31:01.023 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.282 [2024-07-24 17:56:22.674135] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
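Annotation: the header/data digests under test are enabled in the JSON printed just above ("hdgst": true, "ddgst": true on the initiator-side controller attach), not as fio options; fio only drives the 128 KiB random-read workload through the spdk_bdev engine. The harness generates the actual job file with gen_fio_conf and passes it over /dev/fd/61; a rough hand-written equivalent of the invocation is sketched below, where the bdev name Nvme0n1 and the config path /tmp/nvmf_digest.json (a digest-enabled variant of the config sketched earlier) are assumptions:

# approximate stand-alone form of the digest workload traced above
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvmf_digest.json \
    --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
    --thread=1 --time_based=1 --runtime=10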
00:31:01.282 [2024-07-24 17:56:22.674180] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:11.255 00:31:11.255 filename0: (groupid=0, jobs=1): err= 0: pid=807309: Wed Jul 24 17:56:32 2024 00:31:11.255 read: IOPS=334, BW=41.8MiB/s (43.8MB/s)(420MiB/10047msec) 00:31:11.255 slat (nsec): min=4184, max=46248, avg=10290.40, stdev=2376.66 00:31:11.255 clat (usec): min=4718, max=58031, avg=8944.86, stdev=4048.02 00:31:11.255 lat (usec): min=4726, max=58039, avg=8955.15, stdev=4048.42 00:31:11.255 clat percentiles (usec): 00:31:11.255 | 1.00th=[ 5538], 5.00th=[ 5866], 10.00th=[ 6194], 20.00th=[ 6849], 00:31:11.255 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9110], 00:31:11.255 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11207], 95.00th=[12125], 00:31:11.255 | 99.00th=[15008], 99.50th=[52691], 99.90th=[56886], 99.95th=[56886], 00:31:11.255 | 99.99th=[57934] 00:31:11.255 bw ( KiB/s): min=27136, max=47872, per=48.64%, avg=42982.40, stdev=5231.89, samples=20 00:31:11.255 iops : min= 212, max= 374, avg=335.80, stdev=40.87, samples=20 00:31:11.255 lat (msec) : 10=73.93%, 20=25.39%, 50=0.12%, 100=0.57% 00:31:11.255 cpu : usr=93.89%, sys=5.65%, ctx=13, majf=0, minf=226 00:31:11.255 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.255 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.255 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:11.255 filename0: (groupid=0, jobs=1): err= 0: pid=807310: Wed Jul 24 17:56:32 2024 00:31:11.255 read: IOPS=172, BW=21.6MiB/s (22.6MB/s)(217MiB/10050msec) 00:31:11.255 slat (usec): min=6, max=103, avg=11.32, stdev= 3.02 00:31:11.255 clat (usec): min=5696, max=96937, avg=17338.47, stdev=15608.12 00:31:11.255 lat (usec): min=5704, max=96950, avg=17349.79, stdev=15608.09 00:31:11.255 clat percentiles (usec): 00:31:11.255 | 1.00th=[ 6194], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9896], 00:31:11.255 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11600], 60.00th=[12256], 00:31:11.255 | 70.00th=[13173], 80.00th=[14746], 90.00th=[52167], 95.00th=[54789], 00:31:11.255 | 99.00th=[60031], 99.50th=[93848], 99.90th=[95945], 99.95th=[96994], 00:31:11.255 | 99.99th=[96994] 00:31:11.255 bw ( KiB/s): min=15104, max=30976, per=25.10%, avg=22182.40, stdev=3306.97, samples=20 00:31:11.255 iops : min= 118, max= 242, avg=173.30, stdev=25.84, samples=20 00:31:11.255 lat (msec) : 10=20.58%, 20=66.05%, 50=0.46%, 100=12.91% 00:31:11.255 cpu : usr=96.69%, sys=2.94%, ctx=13, majf=0, minf=54 00:31:11.255 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.255 issued rwts: total=1735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.255 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:11.255 filename0: (groupid=0, jobs=1): err= 0: pid=807311: Wed Jul 24 17:56:32 2024 00:31:11.255 read: IOPS=183, BW=22.9MiB/s (24.1MB/s)(230MiB/10044msec) 00:31:11.255 slat (nsec): min=6443, max=24573, avg=11164.82, stdev=2143.50 00:31:11.255 clat (usec): min=5537, max=97930, avg=16312.17, stdev=14064.05 00:31:11.255 lat (usec): min=5545, max=97942, avg=16323.34, stdev=14064.07 00:31:11.255 clat percentiles (usec): 
00:31:11.255 | 1.00th=[ 7373], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9634], 00:31:11.255 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11469], 60.00th=[11863], 00:31:11.255 | 70.00th=[12649], 80.00th=[13829], 90.00th=[51119], 95.00th=[53216], 00:31:11.255 | 99.00th=[56361], 99.50th=[57410], 99.90th=[93848], 99.95th=[98042], 00:31:11.255 | 99.99th=[98042] 00:31:11.255 bw ( KiB/s): min=17408, max=29184, per=26.67%, avg=23567.65, stdev=3929.38, samples=20 00:31:11.255 iops : min= 136, max= 228, avg=184.10, stdev=30.67, samples=20 00:31:11.255 lat (msec) : 10=24.74%, 20=62.89%, 50=0.92%, 100=11.45% 00:31:11.255 cpu : usr=96.08%, sys=3.56%, ctx=15, majf=0, minf=114 00:31:11.255 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.255 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.255 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:11.255 00:31:11.255 Run status group 0 (all jobs): 00:31:11.255 READ: bw=86.3MiB/s (90.5MB/s), 21.6MiB/s-41.8MiB/s (22.6MB/s-43.8MB/s), io=867MiB (909MB), run=10044-10050msec 00:31:11.514 17:56:33 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:11.514 17:56:33 -- target/dif.sh@43 -- # local sub 00:31:11.514 17:56:33 -- target/dif.sh@45 -- # for sub in "$@" 00:31:11.514 17:56:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:11.514 17:56:33 -- target/dif.sh@36 -- # local sub_id=0 00:31:11.514 17:56:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:11.514 17:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.514 17:56:33 -- common/autotest_common.sh@10 -- # set +x 00:31:11.514 17:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.514 17:56:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:11.514 17:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.514 17:56:33 -- common/autotest_common.sh@10 -- # set +x 00:31:11.514 17:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.514 00:31:11.514 real 0m11.077s 00:31:11.514 user 0m35.485s 00:31:11.514 sys 0m1.538s 00:31:11.514 17:56:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:11.514 17:56:33 -- common/autotest_common.sh@10 -- # set +x 00:31:11.514 ************************************ 00:31:11.514 END TEST fio_dif_digest 00:31:11.514 ************************************ 00:31:11.514 17:56:33 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:11.514 17:56:33 -- target/dif.sh@147 -- # nvmftestfini 00:31:11.514 17:56:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:11.514 17:56:33 -- nvmf/common.sh@116 -- # sync 00:31:11.514 17:56:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:11.514 17:56:33 -- nvmf/common.sh@119 -- # set +e 00:31:11.514 17:56:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:11.514 17:56:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:11.514 rmmod nvme_tcp 00:31:11.514 rmmod nvme_fabrics 00:31:11.773 rmmod nvme_keyring 00:31:11.773 17:56:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:11.773 17:56:33 -- nvmf/common.sh@123 -- # set -e 00:31:11.773 17:56:33 -- nvmf/common.sh@124 -- # return 0 00:31:11.773 17:56:33 -- nvmf/common.sh@477 -- # '[' -n 798572 ']' 00:31:11.773 17:56:33 -- nvmf/common.sh@478 -- # killprocess 798572 00:31:11.773 17:56:33 -- common/autotest_common.sh@926 -- # 
'[' -z 798572 ']' 00:31:11.773 17:56:33 -- common/autotest_common.sh@930 -- # kill -0 798572 00:31:11.773 17:56:33 -- common/autotest_common.sh@931 -- # uname 00:31:11.773 17:56:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:11.773 17:56:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 798572 00:31:11.773 17:56:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:11.773 17:56:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:11.773 17:56:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 798572' 00:31:11.773 killing process with pid 798572 00:31:11.773 17:56:33 -- common/autotest_common.sh@945 -- # kill 798572 00:31:11.773 17:56:33 -- common/autotest_common.sh@950 -- # wait 798572 00:31:12.031 17:56:33 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:12.031 17:56:33 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:14.562 Waiting for block devices as requested 00:31:14.562 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:14.562 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:14.562 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:14.562 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:14.562 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:14.562 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:14.562 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:14.562 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:14.821 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:14.821 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:14.821 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:14.821 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:15.079 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:15.079 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:15.079 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:15.079 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:15.338 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:15.338 17:56:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:15.338 17:56:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:15.338 17:56:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:15.338 17:56:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:15.338 17:56:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.338 17:56:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:15.338 17:56:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.876 17:56:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:17.876 00:31:17.876 real 1m12.085s 00:31:17.876 user 7m7.527s 00:31:17.876 sys 0m18.075s 00:31:17.876 17:56:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.876 17:56:38 -- common/autotest_common.sh@10 -- # set +x 00:31:17.876 ************************************ 00:31:17.876 END TEST nvmf_dif 00:31:17.876 ************************************ 00:31:17.876 17:56:38 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:17.876 17:56:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:17.876 17:56:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:17.876 17:56:38 -- common/autotest_common.sh@10 -- # set +x 00:31:17.876 ************************************ 00:31:17.876 START TEST nvmf_abort_qd_sizes 00:31:17.876 ************************************ 
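The cleanup traced above (destroy_subsystems through nvmftestfini) amounts to deleting the test subsystem and its null bdev over the SPDK RPC socket, stopping the nvmf_tgt process, and unloading the initiator modules. A minimal hand-run sketch of the same steps, assuming the SPDK repo root as the working directory, the default RPC socket, and the subsystem/bdev names used in this run; $nvmf_tgt_pid is a placeholder for the pid the harness tracked (798572 here):

    # Drop the NVMe-oF subsystem and the null bdev that backed its namespace
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0
    # Stop the nvmf_tgt reactor process (the harness also wait(1)s on it, since it launched it)
    kill "$nvmf_tgt_pid"
    # Unload the kernel initiator modules; nvme_keyring is pulled out as a dependency,
    # which is what the rmmod lines in the trace above show
    sudo modprobe -v -r nvme-tcp
    sudo modprobe -v -r nvme-fabrics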
00:31:17.876 17:56:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:17.876 * Looking for test storage... 00:31:17.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:17.877 17:56:38 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.877 17:56:38 -- nvmf/common.sh@7 -- # uname -s 00:31:17.877 17:56:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.877 17:56:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.877 17:56:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.877 17:56:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.877 17:56:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.877 17:56:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.877 17:56:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.877 17:56:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.877 17:56:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.877 17:56:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.877 17:56:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:17.877 17:56:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:17.877 17:56:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.877 17:56:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.877 17:56:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.877 17:56:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.877 17:56:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.877 17:56:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.877 17:56:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.877 17:56:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.877 17:56:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.877 17:56:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.877 17:56:39 -- paths/export.sh@5 -- # export PATH 00:31:17.877 17:56:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.877 17:56:39 -- nvmf/common.sh@46 -- # : 0 00:31:17.877 17:56:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:17.877 17:56:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:17.877 17:56:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:17.877 17:56:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.877 17:56:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.877 17:56:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:17.877 17:56:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:17.877 17:56:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:17.877 17:56:39 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:31:17.877 17:56:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:17.877 17:56:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.877 17:56:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:17.877 17:56:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:17.877 17:56:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:17.877 17:56:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.877 17:56:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:17.877 17:56:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.877 17:56:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:17.877 17:56:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:17.877 17:56:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:17.877 17:56:39 -- common/autotest_common.sh@10 -- # set +x 00:31:23.181 17:56:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:23.181 17:56:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:23.181 17:56:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:23.181 17:56:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:23.181 17:56:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:23.181 17:56:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:23.181 17:56:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:23.181 17:56:44 -- nvmf/common.sh@294 -- # net_devs=() 00:31:23.181 17:56:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:23.181 17:56:44 -- nvmf/common.sh@295 -- # e810=() 00:31:23.181 17:56:44 -- nvmf/common.sh@295 -- # local -ga e810 00:31:23.181 17:56:44 -- nvmf/common.sh@296 -- # x722=() 00:31:23.181 17:56:44 -- nvmf/common.sh@296 -- # local -ga x722 00:31:23.181 17:56:44 -- nvmf/common.sh@297 -- # mlx=() 00:31:23.181 17:56:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:23.181 17:56:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.181 17:56:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:23.181 17:56:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:23.181 17:56:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:23.181 17:56:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:23.181 17:56:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:23.181 17:56:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:23.182 17:56:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:23.182 17:56:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:23.182 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:23.182 17:56:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:23.182 17:56:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:23.182 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:23.182 17:56:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:23.182 17:56:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:23.182 17:56:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.182 17:56:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:23.182 17:56:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.182 17:56:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:23.182 Found net devices under 0000:86:00.0: cvl_0_0 00:31:23.182 17:56:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.182 17:56:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:23.182 17:56:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.182 17:56:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:23.182 17:56:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.182 17:56:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:23.182 Found net devices under 0000:86:00.1: cvl_0_1 00:31:23.182 17:56:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.182 17:56:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:23.182 17:56:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:23.182 17:56:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:23.182 17:56:44 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:23.182 17:56:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:23.182 17:56:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.182 17:56:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.182 17:56:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.182 17:56:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:23.182 17:56:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.182 17:56:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.182 17:56:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:23.182 17:56:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.182 17:56:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.182 17:56:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:23.182 17:56:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:23.182 17:56:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.182 17:56:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.182 17:56:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.182 17:56:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.182 17:56:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:23.182 17:56:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.182 17:56:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.182 17:56:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.182 17:56:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:23.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:31:23.182 00:31:23.182 --- 10.0.0.2 ping statistics --- 00:31:23.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.182 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:31:23.182 17:56:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:23.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:31:23.182 00:31:23.182 --- 10.0.0.1 ping statistics --- 00:31:23.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.182 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:31:23.182 17:56:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.182 17:56:44 -- nvmf/common.sh@410 -- # return 0 00:31:23.182 17:56:44 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:23.182 17:56:44 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:25.715 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:25.715 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:26.650 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:26.650 17:56:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.651 17:56:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:26.651 17:56:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:26.651 17:56:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.651 17:56:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:26.651 17:56:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:26.651 17:56:48 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:31:26.651 17:56:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:26.651 17:56:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:26.651 17:56:48 -- common/autotest_common.sh@10 -- # set +x 00:31:26.651 17:56:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:26.651 17:56:48 -- nvmf/common.sh@469 -- # nvmfpid=815100 00:31:26.651 17:56:48 -- nvmf/common.sh@470 -- # waitforlisten 815100 00:31:26.651 17:56:48 -- common/autotest_common.sh@819 -- # '[' -z 815100 ']' 00:31:26.651 17:56:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.651 17:56:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:26.651 17:56:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.651 17:56:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:26.651 17:56:48 -- common/autotest_common.sh@10 -- # set +x 00:31:26.651 [2024-07-24 17:56:48.068917] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:26.651 [2024-07-24 17:56:48.068964] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.651 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.651 [2024-07-24 17:56:48.127550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:26.651 [2024-07-24 17:56:48.209373] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:26.651 [2024-07-24 17:56:48.209482] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.651 [2024-07-24 17:56:48.209490] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.651 [2024-07-24 17:56:48.209496] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:26.651 [2024-07-24 17:56:48.209539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.651 [2024-07-24 17:56:48.209625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.651 [2024-07-24 17:56:48.209733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:26.651 [2024-07-24 17:56:48.209734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.586 17:56:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:27.586 17:56:48 -- common/autotest_common.sh@852 -- # return 0 00:31:27.586 17:56:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:27.586 17:56:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:27.586 17:56:48 -- common/autotest_common.sh@10 -- # set +x 00:31:27.586 17:56:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.586 17:56:48 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:27.586 17:56:48 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:31:27.586 17:56:48 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:31:27.586 17:56:48 -- scripts/common.sh@311 -- # local bdf bdfs 00:31:27.586 17:56:48 -- scripts/common.sh@312 -- # local nvmes 00:31:27.586 17:56:48 -- scripts/common.sh@314 -- # [[ -n 0000:5e:00.0 ]] 00:31:27.586 17:56:48 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:27.586 17:56:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:31:27.586 17:56:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:31:27.586 17:56:48 -- scripts/common.sh@322 -- # uname -s 00:31:27.586 17:56:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:31:27.586 17:56:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:31:27.586 17:56:48 -- scripts/common.sh@327 -- # (( 1 )) 00:31:27.586 17:56:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0 00:31:27.586 17:56:48 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:31:27.586 17:56:48 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:5e:00.0 00:31:27.586 17:56:48 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:31:27.586 17:56:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:27.586 17:56:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:27.586 17:56:48 -- common/autotest_common.sh@10 -- # set +x 00:31:27.586 ************************************ 00:31:27.586 START TEST 
spdk_target_abort 00:31:27.586 ************************************ 00:31:27.586 17:56:48 -- common/autotest_common.sh@1104 -- # spdk_target 00:31:27.586 17:56:48 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:27.586 17:56:48 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:31:27.586 17:56:48 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:31:27.586 17:56:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.586 17:56:48 -- common/autotest_common.sh@10 -- # set +x 00:31:30.868 spdk_targetn1 00:31:30.868 17:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:30.868 17:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.868 17:56:51 -- common/autotest_common.sh@10 -- # set +x 00:31:30.868 [2024-07-24 17:56:51.784290] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.868 17:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:31:30.868 17:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.868 17:56:51 -- common/autotest_common.sh@10 -- # set +x 00:31:30.868 17:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:31:30.868 17:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.868 17:56:51 -- common/autotest_common.sh@10 -- # set +x 00:31:30.868 17:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:31:30.868 17:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.868 17:56:51 -- common/autotest_common.sh@10 -- # set +x 00:31:30.868 [2024-07-24 17:56:51.817246] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.868 17:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:30.868 17:56:51 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:30.868 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.153 Initializing NVMe Controllers 00:31:34.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:31:34.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:31:34.153 Initialization complete. Launching workers. 00:31:34.153 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5055, failed: 0 00:31:34.153 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1675, failed to submit 3380 00:31:34.153 success 853, unsuccess 822, failed 0 00:31:34.153 17:56:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:34.153 17:56:55 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:34.153 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.441 Initializing NVMe Controllers 00:31:37.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:31:37.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:31:37.441 Initialization complete. Launching workers. 00:31:37.441 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8580, failed: 0 00:31:37.441 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1225, failed to submit 7355 00:31:37.441 success 342, unsuccess 883, failed 0 00:31:37.441 17:56:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:37.441 17:56:58 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:37.441 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.978 Initializing NVMe Controllers 00:31:39.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:31:39.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:31:39.978 Initialization complete. Launching workers. 
00:31:39.978 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32756, failed: 0 00:31:39.978 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2995, failed to submit 29761 00:31:39.978 success 637, unsuccess 2358, failed 0 00:31:39.978 17:57:01 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:31:39.978 17:57:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.978 17:57:01 -- common/autotest_common.sh@10 -- # set +x 00:31:39.978 17:57:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.978 17:57:01 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:39.978 17:57:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.978 17:57:01 -- common/autotest_common.sh@10 -- # set +x 00:31:41.353 17:57:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.353 17:57:02 -- target/abort_qd_sizes.sh@62 -- # killprocess 815100 00:31:41.353 17:57:02 -- common/autotest_common.sh@926 -- # '[' -z 815100 ']' 00:31:41.353 17:57:02 -- common/autotest_common.sh@930 -- # kill -0 815100 00:31:41.353 17:57:02 -- common/autotest_common.sh@931 -- # uname 00:31:41.353 17:57:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:41.353 17:57:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 815100 00:31:41.353 17:57:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:41.353 17:57:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:41.353 17:57:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 815100' 00:31:41.353 killing process with pid 815100 00:31:41.353 17:57:02 -- common/autotest_common.sh@945 -- # kill 815100 00:31:41.353 17:57:02 -- common/autotest_common.sh@950 -- # wait 815100 00:31:41.612 00:31:41.612 real 0m14.031s 00:31:41.612 user 0m56.085s 00:31:41.612 sys 0m1.988s 00:31:41.612 17:57:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:41.612 17:57:02 -- common/autotest_common.sh@10 -- # set +x 00:31:41.612 ************************************ 00:31:41.612 END TEST spdk_target_abort 00:31:41.612 ************************************ 00:31:41.612 17:57:03 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:31:41.612 17:57:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:41.612 17:57:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:41.612 17:57:03 -- common/autotest_common.sh@10 -- # set +x 00:31:41.612 ************************************ 00:31:41.612 START TEST kernel_target_abort 00:31:41.612 ************************************ 00:31:41.612 17:57:03 -- common/autotest_common.sh@1104 -- # kernel_target 00:31:41.612 17:57:03 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:31:41.612 17:57:03 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:31:41.612 17:57:03 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:31:41.612 17:57:03 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:31:41.612 17:57:03 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:31:41.612 17:57:03 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:31:41.612 17:57:03 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:41.612 17:57:03 -- nvmf/common.sh@627 -- # local block nvme 00:31:41.612 17:57:03 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:31:41.612 17:57:03 -- nvmf/common.sh@630 -- # modprobe nvmet 00:31:41.612 17:57:03 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:41.612 17:57:03 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:44.143 Waiting for block devices as requested 00:31:44.143 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:44.143 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:44.143 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:44.402 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:44.402 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:44.402 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:44.661 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:44.661 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:44.661 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:44.661 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:44.920 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:44.920 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:44.920 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:45.178 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:45.178 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:45.178 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:45.178 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:45.437 17:57:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:31:45.437 17:57:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:45.437 17:57:06 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:31:45.437 17:57:06 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:31:45.437 17:57:06 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:45.437 No valid GPT data, bailing 00:31:45.437 17:57:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:45.437 17:57:06 -- scripts/common.sh@393 -- # pt= 00:31:45.437 17:57:06 -- scripts/common.sh@394 -- # return 1 00:31:45.437 17:57:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:31:45.437 17:57:06 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:31:45.437 17:57:06 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:31:45.437 17:57:06 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:31:45.437 17:57:06 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:45.437 17:57:06 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:31:45.437 17:57:06 -- nvmf/common.sh@654 -- # echo 1 00:31:45.437 17:57:06 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:31:45.437 17:57:06 -- nvmf/common.sh@656 -- # echo 1 00:31:45.437 17:57:06 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:31:45.437 17:57:06 -- nvmf/common.sh@663 -- # echo tcp 00:31:45.437 17:57:06 -- nvmf/common.sh@664 -- # echo 4420 00:31:45.437 17:57:06 -- nvmf/common.sh@665 -- # echo ipv4 00:31:45.437 17:57:06 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:45.437 17:57:06 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:31:45.437 00:31:45.437 Discovery Log Number of Records 2, Generation counter 2 00:31:45.437 =====Discovery Log Entry 0====== 00:31:45.437 trtype: tcp 00:31:45.437 adrfam: ipv4 00:31:45.437 
subtype: current discovery subsystem 00:31:45.437 treq: not specified, sq flow control disable supported 00:31:45.437 portid: 1 00:31:45.437 trsvcid: 4420 00:31:45.437 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:45.437 traddr: 10.0.0.1 00:31:45.437 eflags: none 00:31:45.437 sectype: none 00:31:45.437 =====Discovery Log Entry 1====== 00:31:45.437 trtype: tcp 00:31:45.437 adrfam: ipv4 00:31:45.437 subtype: nvme subsystem 00:31:45.437 treq: not specified, sq flow control disable supported 00:31:45.437 portid: 1 00:31:45.437 trsvcid: 4420 00:31:45.437 subnqn: kernel_target 00:31:45.437 traddr: 10.0.0.1 00:31:45.437 eflags: none 00:31:45.437 sectype: none 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:45.437 17:57:06 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:45.438 17:57:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:45.438 17:57:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:45.438 17:57:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:45.438 17:57:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:45.438 17:57:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:45.438 17:57:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:45.438 17:57:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:45.438 17:57:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:31:45.438 17:57:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:45.438 17:57:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:31:45.438 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.803 Initializing NVMe Controllers 00:31:48.803 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:31:48.803 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:31:48.803 Initialization complete. Launching workers. 
00:31:48.803 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31781, failed: 0 00:31:48.803 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31781, failed to submit 0 00:31:48.803 success 0, unsuccess 31781, failed 0 00:31:48.803 17:57:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:48.803 17:57:09 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:31:48.803 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.090 Initializing NVMe Controllers 00:31:52.090 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:31:52.090 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:31:52.090 Initialization complete. Launching workers. 00:31:52.090 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 66254, failed: 0 00:31:52.090 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 16742, failed to submit 49512 00:31:52.090 success 0, unsuccess 16742, failed 0 00:31:52.090 17:57:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:52.090 17:57:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:31:52.090 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.624 Initializing NVMe Controllers 00:31:54.624 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:31:54.624 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:31:54.624 Initialization complete. Launching workers. 
00:31:54.624 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65558, failed: 0 00:31:54.624 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 16370, failed to submit 49188 00:31:54.624 success 0, unsuccess 16370, failed 0 00:31:54.624 17:57:16 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:31:54.624 17:57:16 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:31:54.624 17:57:16 -- nvmf/common.sh@677 -- # echo 0 00:31:54.624 17:57:16 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:31:54.624 17:57:16 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:31:54.624 17:57:16 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:54.624 17:57:16 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:31:54.624 17:57:16 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:31:54.624 17:57:16 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:31:54.624 00:31:54.624 real 0m13.120s 00:31:54.624 user 0m3.472s 00:31:54.624 sys 0m3.716s 00:31:54.624 17:57:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:54.624 17:57:16 -- common/autotest_common.sh@10 -- # set +x 00:31:54.624 ************************************ 00:31:54.624 END TEST kernel_target_abort 00:31:54.624 ************************************ 00:31:54.624 17:57:16 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:31:54.624 17:57:16 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:31:54.624 17:57:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:54.624 17:57:16 -- nvmf/common.sh@116 -- # sync 00:31:54.624 17:57:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:54.624 17:57:16 -- nvmf/common.sh@119 -- # set +e 00:31:54.624 17:57:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:54.624 17:57:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:54.624 rmmod nvme_tcp 00:31:54.624 rmmod nvme_fabrics 00:31:54.624 rmmod nvme_keyring 00:31:54.624 17:57:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:54.882 17:57:16 -- nvmf/common.sh@123 -- # set -e 00:31:54.882 17:57:16 -- nvmf/common.sh@124 -- # return 0 00:31:54.882 17:57:16 -- nvmf/common.sh@477 -- # '[' -n 815100 ']' 00:31:54.882 17:57:16 -- nvmf/common.sh@478 -- # killprocess 815100 00:31:54.882 17:57:16 -- common/autotest_common.sh@926 -- # '[' -z 815100 ']' 00:31:54.882 17:57:16 -- common/autotest_common.sh@930 -- # kill -0 815100 00:31:54.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (815100) - No such process 00:31:54.882 17:57:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 815100 is not found' 00:31:54.882 Process with pid 815100 is not found 00:31:54.882 17:57:16 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:54.882 17:57:16 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:57.414 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:31:57.414 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:31:57.414 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:31:57.414 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:31:57.414 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:31:57.414 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:31:57.414 0000:00:04.2 (8086 2021): Already using the ioatdma driver 
00:31:57.414 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:31:57.414 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:31:57.414 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:31:57.414 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:31:57.414 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:31:57.414 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:31:57.672 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:31:57.672 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:31:57.673 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:31:57.673 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:31:57.673 17:57:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:57.673 17:57:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:57.673 17:57:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:57.673 17:57:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:57.673 17:57:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.673 17:57:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:57.673 17:57:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.579 17:57:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:59.579 00:31:59.579 real 0m42.259s 00:31:59.579 user 1m3.659s 00:31:59.579 sys 0m13.508s 00:31:59.579 17:57:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:59.579 17:57:21 -- common/autotest_common.sh@10 -- # set +x 00:31:59.579 ************************************ 00:31:59.579 END TEST nvmf_abort_qd_sizes 00:31:59.579 ************************************ 00:31:59.838 17:57:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:59.838 17:57:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:59.838 17:57:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:59.838 17:57:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:31:59.838 17:57:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:59.838 17:57:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:59.838 17:57:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:59.838 17:57:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:59.838 17:57:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:59.838 17:57:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:59.838 17:57:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:59.839 17:57:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:59.839 17:57:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:59.839 17:57:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:59.839 17:57:21 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:31:59.839 17:57:21 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:31:59.839 17:57:21 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:31:59.839 17:57:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:59.839 17:57:21 -- common/autotest_common.sh@10 -- # set +x 00:31:59.839 17:57:21 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:31:59.839 17:57:21 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:31:59.839 17:57:21 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:31:59.839 17:57:21 -- common/autotest_common.sh@10 -- # set +x 00:32:04.027 INFO: APP EXITING 00:32:04.027 INFO: killing all VMs 00:32:04.027 INFO: killing vhost app 00:32:04.027 INFO: EXIT DONE 00:32:06.558 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:32:06.558 0000:00:04.7 (8086 2021): 
Already using the ioatdma driver 00:32:06.558 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:06.558 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:06.817 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:10.101 Cleaning 00:32:10.101 Removing: /var/run/dpdk/spdk0/config 00:32:10.101 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:10.101 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:10.101 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:10.101 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:10.101 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:10.101 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:10.101 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:10.101 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:10.101 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:10.101 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:10.101 Removing: /var/run/dpdk/spdk1/config 00:32:10.101 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:10.101 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:10.101 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:10.101 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:10.101 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:10.101 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:10.101 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:10.101 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:10.101 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:10.101 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:10.101 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:10.101 Removing: /var/run/dpdk/spdk2/config 00:32:10.101 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:10.101 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:10.101 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:10.101 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:10.101 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:10.101 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:10.101 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:10.101 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:10.101 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:10.101 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:10.101 Removing: /var/run/dpdk/spdk3/config 00:32:10.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:10.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:10.101 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:10.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:10.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:10.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:10.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:10.101 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:10.101 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:10.101 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:10.101 Removing: /var/run/dpdk/spdk4/config 00:32:10.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:10.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:10.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:10.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:10.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:10.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:10.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:10.101 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:10.101 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:10.101 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:10.101 Removing: /dev/shm/bdev_svc_trace.1 00:32:10.101 Removing: /dev/shm/nvmf_trace.0 00:32:10.101 Removing: /dev/shm/spdk_tgt_trace.pid424329 00:32:10.101 Removing: /var/run/dpdk/spdk0 00:32:10.101 Removing: /var/run/dpdk/spdk1 00:32:10.101 Removing: /var/run/dpdk/spdk2 00:32:10.101 Removing: /var/run/dpdk/spdk3 00:32:10.101 Removing: /var/run/dpdk/spdk4 00:32:10.101 Removing: /var/run/dpdk/spdk_pid422149 00:32:10.101 Removing: /var/run/dpdk/spdk_pid423254 00:32:10.101 Removing: /var/run/dpdk/spdk_pid424329 00:32:10.101 Removing: /var/run/dpdk/spdk_pid424996 00:32:10.101 Removing: /var/run/dpdk/spdk_pid426523 00:32:10.101 Removing: /var/run/dpdk/spdk_pid427819 00:32:10.101 Removing: /var/run/dpdk/spdk_pid428098 00:32:10.101 Removing: /var/run/dpdk/spdk_pid428390 00:32:10.101 Removing: /var/run/dpdk/spdk_pid428693 00:32:10.101 Removing: /var/run/dpdk/spdk_pid428982 00:32:10.101 Removing: /var/run/dpdk/spdk_pid429233 00:32:10.101 Removing: /var/run/dpdk/spdk_pid429483 00:32:10.101 Removing: /var/run/dpdk/spdk_pid429765 00:32:10.101 Removing: /var/run/dpdk/spdk_pid430772 00:32:10.101 Removing: /var/run/dpdk/spdk_pid434063 00:32:10.101 Removing: /var/run/dpdk/spdk_pid434350 00:32:10.101 Removing: /var/run/dpdk/spdk_pid434702 00:32:10.101 Removing: /var/run/dpdk/spdk_pid434827 00:32:10.101 Removing: /var/run/dpdk/spdk_pid435327 00:32:10.101 Removing: /var/run/dpdk/spdk_pid435454 00:32:10.101 Removing: /var/run/dpdk/spdk_pid435842 00:32:10.101 Removing: /var/run/dpdk/spdk_pid436075 00:32:10.101 Removing: /var/run/dpdk/spdk_pid436334 00:32:10.101 Removing: /var/run/dpdk/spdk_pid436571 00:32:10.101 Removing: /var/run/dpdk/spdk_pid436662 00:32:10.101 Removing: /var/run/dpdk/spdk_pid436848 00:32:10.101 Removing: /var/run/dpdk/spdk_pid437402 00:32:10.101 Removing: /var/run/dpdk/spdk_pid437632 00:32:10.101 Removing: /var/run/dpdk/spdk_pid437943 00:32:10.101 Removing: /var/run/dpdk/spdk_pid438183 00:32:10.101 Removing: /var/run/dpdk/spdk_pid438240 00:32:10.101 Removing: /var/run/dpdk/spdk_pid438296 00:32:10.101 Removing: /var/run/dpdk/spdk_pid438531 00:32:10.101 Removing: /var/run/dpdk/spdk_pid438786 00:32:10.101 Removing: /var/run/dpdk/spdk_pid439018 00:32:10.101 Removing: /var/run/dpdk/spdk_pid439267 00:32:10.101 Removing: /var/run/dpdk/spdk_pid439512 00:32:10.101 Removing: /var/run/dpdk/spdk_pid439759 
00:32:10.101 Removing: /var/run/dpdk/spdk_pid439996 00:32:10.101 Removing: /var/run/dpdk/spdk_pid440252 00:32:10.101 Removing: /var/run/dpdk/spdk_pid440486 00:32:10.101 Removing: /var/run/dpdk/spdk_pid440733 00:32:10.101 Removing: /var/run/dpdk/spdk_pid440973 00:32:10.101 Removing: /var/run/dpdk/spdk_pid441223 00:32:10.101 Removing: /var/run/dpdk/spdk_pid441457 00:32:10.101 Removing: /var/run/dpdk/spdk_pid441710 00:32:10.101 Removing: /var/run/dpdk/spdk_pid441946 00:32:10.101 Removing: /var/run/dpdk/spdk_pid442219 00:32:10.101 Removing: /var/run/dpdk/spdk_pid442451 00:32:10.101 Removing: /var/run/dpdk/spdk_pid442700 00:32:10.101 Removing: /var/run/dpdk/spdk_pid442940 00:32:10.101 Removing: /var/run/dpdk/spdk_pid443191 00:32:10.101 Removing: /var/run/dpdk/spdk_pid443423 00:32:10.101 Removing: /var/run/dpdk/spdk_pid443678 00:32:10.101 Removing: /var/run/dpdk/spdk_pid443914 00:32:10.101 Removing: /var/run/dpdk/spdk_pid444163 00:32:10.101 Removing: /var/run/dpdk/spdk_pid444405 00:32:10.101 Removing: /var/run/dpdk/spdk_pid444652 00:32:10.101 Removing: /var/run/dpdk/spdk_pid444890 00:32:10.101 Removing: /var/run/dpdk/spdk_pid445143 00:32:10.101 Removing: /var/run/dpdk/spdk_pid445377 00:32:10.101 Removing: /var/run/dpdk/spdk_pid445626 00:32:10.101 Removing: /var/run/dpdk/spdk_pid445867 00:32:10.101 Removing: /var/run/dpdk/spdk_pid446114 00:32:10.101 Removing: /var/run/dpdk/spdk_pid446353 00:32:10.101 Removing: /var/run/dpdk/spdk_pid446612 00:32:10.101 Removing: /var/run/dpdk/spdk_pid446849 00:32:10.101 Removing: /var/run/dpdk/spdk_pid447103 00:32:10.101 Removing: /var/run/dpdk/spdk_pid447341 00:32:10.101 Removing: /var/run/dpdk/spdk_pid447594 00:32:10.101 Removing: /var/run/dpdk/spdk_pid447826 00:32:10.101 Removing: /var/run/dpdk/spdk_pid448082 00:32:10.101 Removing: /var/run/dpdk/spdk_pid448143 00:32:10.101 Removing: /var/run/dpdk/spdk_pid448523 00:32:10.101 Removing: /var/run/dpdk/spdk_pid452328 00:32:10.101 Removing: /var/run/dpdk/spdk_pid533573 00:32:10.101 Removing: /var/run/dpdk/spdk_pid537706 00:32:10.101 Removing: /var/run/dpdk/spdk_pid547834 00:32:10.101 Removing: /var/run/dpdk/spdk_pid553393 00:32:10.101 Removing: /var/run/dpdk/spdk_pid558054 00:32:10.101 Removing: /var/run/dpdk/spdk_pid558600 00:32:10.101 Removing: /var/run/dpdk/spdk_pid567103 00:32:10.101 Removing: /var/run/dpdk/spdk_pid567364 00:32:10.101 Removing: /var/run/dpdk/spdk_pid571652 00:32:10.101 Removing: /var/run/dpdk/spdk_pid577344 00:32:10.101 Removing: /var/run/dpdk/spdk_pid579975 00:32:10.101 Removing: /var/run/dpdk/spdk_pid590264 00:32:10.101 Removing: /var/run/dpdk/spdk_pid599216 00:32:10.101 Removing: /var/run/dpdk/spdk_pid601022 00:32:10.101 Removing: /var/run/dpdk/spdk_pid602014 00:32:10.101 Removing: /var/run/dpdk/spdk_pid619044 00:32:10.102 Removing: /var/run/dpdk/spdk_pid622835 00:32:10.102 Removing: /var/run/dpdk/spdk_pid627162 00:32:10.102 Removing: /var/run/dpdk/spdk_pid628947 00:32:10.102 Removing: /var/run/dpdk/spdk_pid630881 00:32:10.102 Removing: /var/run/dpdk/spdk_pid631120 00:32:10.102 Removing: /var/run/dpdk/spdk_pid631281 00:32:10.102 Removing: /var/run/dpdk/spdk_pid631456 00:32:10.102 Removing: /var/run/dpdk/spdk_pid632130 00:32:10.102 Removing: /var/run/dpdk/spdk_pid633997 00:32:10.102 Removing: /var/run/dpdk/spdk_pid635007 00:32:10.102 Removing: /var/run/dpdk/spdk_pid635518 00:32:10.102 Removing: /var/run/dpdk/spdk_pid640982 00:32:10.102 Removing: /var/run/dpdk/spdk_pid646624 00:32:10.102 Removing: /var/run/dpdk/spdk_pid652018 00:32:10.102 Removing: /var/run/dpdk/spdk_pid688342 00:32:10.102 
Removing: /var/run/dpdk/spdk_pid692722 00:32:10.102 Removing: /var/run/dpdk/spdk_pid698771 00:32:10.102 Removing: /var/run/dpdk/spdk_pid700111 00:32:10.102 Removing: /var/run/dpdk/spdk_pid701666 00:32:10.102 Removing: /var/run/dpdk/spdk_pid705996 00:32:10.102 Removing: /var/run/dpdk/spdk_pid710045 00:32:10.102 Removing: /var/run/dpdk/spdk_pid717437 00:32:10.102 Removing: /var/run/dpdk/spdk_pid717449 00:32:10.102 Removing: /var/run/dpdk/spdk_pid722190 00:32:10.102 Removing: /var/run/dpdk/spdk_pid722327 00:32:10.102 Removing: /var/run/dpdk/spdk_pid722451 00:32:10.102 Removing: /var/run/dpdk/spdk_pid722905 00:32:10.102 Removing: /var/run/dpdk/spdk_pid722922 00:32:10.102 Removing: /var/run/dpdk/spdk_pid724331 00:32:10.102 Removing: /var/run/dpdk/spdk_pid726027 00:32:10.360 Removing: /var/run/dpdk/spdk_pid727784 00:32:10.360 Removing: /var/run/dpdk/spdk_pid729437 00:32:10.360 Removing: /var/run/dpdk/spdk_pid731063 00:32:10.360 Removing: /var/run/dpdk/spdk_pid732833 00:32:10.360 Removing: /var/run/dpdk/spdk_pid739133 00:32:10.360 Removing: /var/run/dpdk/spdk_pid739715 00:32:10.360 Removing: /var/run/dpdk/spdk_pid741489 00:32:10.360 Removing: /var/run/dpdk/spdk_pid742542 00:32:10.360 Removing: /var/run/dpdk/spdk_pid748304 00:32:10.360 Removing: /var/run/dpdk/spdk_pid751124 00:32:10.360 Removing: /var/run/dpdk/spdk_pid756569 00:32:10.360 Removing: /var/run/dpdk/spdk_pid762174 00:32:10.360 Removing: /var/run/dpdk/spdk_pid768138 00:32:10.360 Removing: /var/run/dpdk/spdk_pid768675 00:32:10.360 Removing: /var/run/dpdk/spdk_pid769332 00:32:10.360 Removing: /var/run/dpdk/spdk_pid770039 00:32:10.360 Removing: /var/run/dpdk/spdk_pid770918 00:32:10.360 Removing: /var/run/dpdk/spdk_pid771515 00:32:10.360 Removing: /var/run/dpdk/spdk_pid772223 00:32:10.360 Removing: /var/run/dpdk/spdk_pid772873 00:32:10.360 Removing: /var/run/dpdk/spdk_pid777552 00:32:10.360 Removing: /var/run/dpdk/spdk_pid777852 00:32:10.360 Removing: /var/run/dpdk/spdk_pid783799 00:32:10.360 Removing: /var/run/dpdk/spdk_pid783907 00:32:10.360 Removing: /var/run/dpdk/spdk_pid786160 00:32:10.360 Removing: /var/run/dpdk/spdk_pid793760 00:32:10.360 Removing: /var/run/dpdk/spdk_pid793802 00:32:10.360 Removing: /var/run/dpdk/spdk_pid798834 00:32:10.360 Removing: /var/run/dpdk/spdk_pid800830 00:32:10.360 Removing: /var/run/dpdk/spdk_pid802823 00:32:10.360 Removing: /var/run/dpdk/spdk_pid803898 00:32:10.360 Removing: /var/run/dpdk/spdk_pid805909 00:32:10.360 Removing: /var/run/dpdk/spdk_pid806985 00:32:10.360 Removing: /var/run/dpdk/spdk_pid815816 00:32:10.360 Removing: /var/run/dpdk/spdk_pid816283 00:32:10.360 Removing: /var/run/dpdk/spdk_pid816761 00:32:10.360 Removing: /var/run/dpdk/spdk_pid819180 00:32:10.360 Removing: /var/run/dpdk/spdk_pid820037 00:32:10.360 Removing: /var/run/dpdk/spdk_pid820517 00:32:10.360 Clean 00:32:10.360 killing process with pid 377598 00:32:18.543 killing process with pid 377595 00:32:18.543 killing process with pid 377597 00:32:18.543 killing process with pid 377596 00:32:18.543 17:57:39 -- common/autotest_common.sh@1436 -- # return 0 00:32:18.543 17:57:39 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:32:18.543 17:57:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:18.543 17:57:39 -- common/autotest_common.sh@10 -- # set +x 00:32:18.543 17:57:39 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:32:18.543 17:57:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:18.543 17:57:39 -- common/autotest_common.sh@10 -- # set +x 00:32:18.543 17:57:39 -- spdk/autotest.sh@390 -- # chmod a+r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:18.543 17:57:39 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:18.543 17:57:39 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:18.543 17:57:39 -- spdk/autotest.sh@394 -- # hash lcov
00:32:18.543 17:57:39 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:18.543 17:57:39 -- spdk/autotest.sh@396 -- # hostname
00:32:18.543 17:57:39 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:36.629 geninfo: WARNING: invalid characters removed from testname!
00:32:36.629 17:57:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:38.530 17:58:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:40.429 17:58:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:42.328 17:58:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:43.704 17:58:05 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:45.605 17:58:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
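
The autotest.sh trace above (script lines @396 through @402) is the coverage post-processing step: lcov captures a post-test tracefile for this host, merges it with the pre-test baseline, and then strips third-party and helper-app sources out of the combined report. A condensed sketch of that merge-and-filter sequence follows; the output directory, file names, and filter patterns are copied from the log, while the variable names and the loop are illustrative rather than the literal spdk/autotest.sh code.

    #!/usr/bin/env bash
    # Condensed sketch of the lcov post-processing shown in the trace above.
    # Paths and filter patterns come from the log; the array, loop, and
    # variable names are illustrative, not the actual autotest.sh code.
    set -euo pipefail

    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    rc_opts=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
             --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
             --rc genhtml_legend=1 --rc geninfo_all_blocks=1)

    # Merge the pre-test baseline with the post-test capture.
    lcov "${rc_opts[@]}" --no-external -q \
        -a "$out/cov_base.info" -a "$out/cov_test.info" \
        -o "$out/cov_total.info"

    # Drop third-party and helper-app code from the combined tracefile.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${rc_opts[@]}" -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done
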
00:32:46.983 17:58:08 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:46.983 17:58:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:46.983 17:58:08 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:46.983 17:58:08 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:46.983 17:58:08 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:46.983 17:58:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:46.983 17:58:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:46.983 17:58:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:46.983 17:58:08 -- paths/export.sh@5 -- $ export PATH
00:32:46.983 17:58:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:46.983 17:58:08 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:32:46.983 17:58:08 -- common/autobuild_common.sh@438 -- $ date +%s
00:32:46.983 17:58:08 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721836688.XXXXXX
00:32:46.983 17:58:08 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721836688.TWzgLO
00:32:46.983 17:58:08 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
00:32:46.983 17:58:08 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']'
00:32:46.983 17:58:08 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:32:46.983 17:58:08 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:32:46.983 17:58:08 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:32:46.983 17:58:08 -- common/autobuild_common.sh@454 -- $ get_config_params
00:32:46.983 17:58:08 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:32:46.983 17:58:08 -- common/autotest_common.sh@10 -- $ set +x
00:32:46.983 17:58:08 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:32:46.983 17:58:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:32:46.983 17:58:08 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:46.983 17:58:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:32:46.983 17:58:08 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:32:46.983 17:58:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:32:46.983 17:58:08 -- spdk/autopackage.sh@19 -- $ timing_finish
00:32:46.983 17:58:08 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:46.983 17:58:08 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:32:46.983 17:58:08 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:46.983 17:58:08 -- spdk/autopackage.sh@20 -- $ exit 0
00:32:46.983 + [[ -n 334762 ]]
00:32:46.983 + sudo kill 334762
00:32:46.993 [Pipeline] }
00:32:47.013 [Pipeline] // stage
00:32:47.020 [Pipeline] }
00:32:47.038 [Pipeline] // timeout
00:32:47.043 [Pipeline] }
00:32:47.062 [Pipeline] // catchError
00:32:47.068 [Pipeline] }
00:32:47.086 [Pipeline] // wrap
00:32:47.093 [Pipeline] }
00:32:47.106 [Pipeline] // catchError
00:32:47.117 [Pipeline] stage
00:32:47.120 [Pipeline] { (Epilogue)
00:32:47.136 [Pipeline] catchError
00:32:47.138 [Pipeline] {
00:32:47.152 [Pipeline] echo
00:32:47.153 Cleanup processes
00:32:47.159 [Pipeline] sh
00:32:47.443 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:47.443 833427 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:47.458 [Pipeline] sh
00:32:47.744 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:47.744 ++ grep -v 'sudo pgrep'
00:32:47.744 ++ awk '{print $1}'
00:32:47.744 + sudo kill -9
00:32:47.744 + true
00:32:47.756 [Pipeline] sh
00:32:48.041 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:00.301 [Pipeline] sh
00:33:00.588 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:00.588 Artifacts sizes are good
00:33:00.604 [Pipeline] archiveArtifacts
00:33:00.612 Archiving artifacts
00:33:00.812 [Pipeline] sh
00:33:01.097 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:01.113 [Pipeline] cleanWs
00:33:01.124 [WS-CLEANUP] Deleting project workspace...
00:33:01.124 [WS-CLEANUP] Deferred wipeout is used...
00:33:01.132 [WS-CLEANUP] done
00:33:01.135 [Pipeline] }
00:33:01.157 [Pipeline] // catchError
00:33:01.170 [Pipeline] sh
00:33:01.452 + logger -p user.info -t JENKINS-CI
00:33:01.461 [Pipeline] }
00:33:01.477 [Pipeline] // stage
00:33:01.483 [Pipeline] }
00:33:01.500 [Pipeline] // node
00:33:01.507 [Pipeline] End of Pipeline
00:33:01.544 Finished: SUCCESS
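
The "Cleanup processes" step that the Epilogue stage runs above reduces to a pgrep / grep / awk / kill pipeline that reaps anything still running out of the workspace SPDK tree before the workspace is wiped. A minimal sketch of that pattern follows; the workspace path is taken from the log, while the function wrapper and the xargs form are illustrative choices, not the pipeline's literal script.

    #!/usr/bin/env bash
    # Minimal sketch of the stray-process cleanup shown in the Epilogue above.
    # The workspace path comes from the log; the function wrapper and the
    # xargs form are illustrative, not the pipeline's exact commands.
    workspace=/var/jenkins/workspace/nvmf-tcp-phy-autotest

    kill_leftover_spdk() {
        # List anything still running from the workspace SPDK tree, drop the
        # pgrep invocation itself, and SIGKILL whatever remains. '|| true'
        # keeps the stage green when nothing matched, mirroring '+ true' above.
        sudo pgrep -af "$workspace/spdk" \
            | grep -v 'sudo pgrep' \
            | awk '{print $1}' \
            | xargs -r sudo kill -9 || true
    }

    kill_leftover_spdk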